Oct 07 12:27:40 crc systemd[1]: Starting Kubernetes Kubelet...
Oct 07 12:27:40 crc restorecon[4736]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 07 12:27:40 crc restorecon[4736]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Oct 07 12:27:40 crc restorecon[4736]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:40 crc 
restorecon[4736]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 07 12:27:40 crc restorecon[4736]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 07 12:27:40 crc restorecon[4736]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 07 12:27:40 crc restorecon[4736]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 07 12:27:40 crc 
restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 07 
12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 07 12:27:40 crc restorecon[4736]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 07 12:27:40 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 07 12:27:41 crc 
restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 07 12:27:41 crc 
restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 07 12:27:41
crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 
12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 07 12:27:41 crc 
restorecon[4736]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc 
restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc 
restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 
07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 
crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc 
restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc 
restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc 
restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc 
restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 07 12:27:41 crc 
restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 07 12:27:41 crc restorecon[4736]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 07 12:27:41 crc restorecon[4736]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 07 12:27:41 crc restorecon[4736]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Oct 07 12:27:42 crc kubenswrapper[5024]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 07 12:27:42 crc kubenswrapper[5024]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Oct 07 12:27:42 crc kubenswrapper[5024]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 07 12:27:42 crc kubenswrapper[5024]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 07 12:27:42 crc kubenswrapper[5024]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 07 12:27:42 crc kubenswrapper[5024]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.507547 5024 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.511861 5024 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.511889 5024 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.511897 5024 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.511904 5024 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.511911 5024 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.511917 5024 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.511923 5024 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.511928 5024 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.511934 5024 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 07 12:27:42 crc 
kubenswrapper[5024]: W1007 12:27:42.511941 5024 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.511946 5024 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.511952 5024 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.511959 5024 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.511964 5024 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.511969 5024 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.511975 5024 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.511990 5024 feature_gate.go:330] unrecognized feature gate: PinnedImages
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.511995 5024 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512000 5024 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512005 5024 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512010 5024 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512015 5024 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512020 5024 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512025 5024 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512031 5024 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512036 5024 feature_gate.go:330] unrecognized feature gate: Example
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512041 5024 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512046 5024 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512052 5024 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512059 5024 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512099 5024 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512108 5024 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512115 5024 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512124 5024 feature_gate.go:330] unrecognized feature gate: OVNObservability
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512131 5024 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512155 5024 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512160 5024 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512166 5024 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512170 5024 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512178 5024 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512186 5024 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512192 5024 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512198 5024 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512205 5024 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512212 5024 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512218 5024 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512224 5024 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512229 5024 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512234 5024 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512241 5024 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512248 5024 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512254 5024 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512261 5024 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512267 5024 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512273 5024 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512279 5024 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512284 5024 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512290 5024 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512296 5024 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512302 5024 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512307 5024 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512312 5024 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512318 5024 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512323 5024 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512328 5024 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512334 5024 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512339 5024 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512345 5024 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512352 5024 feature_gate.go:330] unrecognized feature gate: NewOLM
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512357 5024 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.512362 5024 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513395 5024 flags.go:64] FLAG: --address="0.0.0.0"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513418 5024 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513434 5024 flags.go:64] FLAG: --anonymous-auth="true"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513444 5024 flags.go:64] FLAG: --application-metrics-count-limit="100"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513454 5024 flags.go:64] FLAG: --authentication-token-webhook="false"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513462 5024 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513473 5024 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513482 5024 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513490 5024 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513498 5024 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513506 5024 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513514 5024 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513521 5024 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513529 5024 flags.go:64] FLAG: --cgroup-root=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513537 5024 flags.go:64] FLAG: --cgroups-per-qos="true"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513544 5024 flags.go:64] FLAG: --client-ca-file=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513552 5024 flags.go:64] FLAG: --cloud-config=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513560 5024 flags.go:64] FLAG: --cloud-provider=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513568 5024 flags.go:64] FLAG: --cluster-dns="[]"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513577 5024 flags.go:64] FLAG: --cluster-domain=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513585 5024 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513593 5024 flags.go:64] FLAG: --config-dir=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513601 5024 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513609 5024 flags.go:64] FLAG: --container-log-max-files="5"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513620 5024 flags.go:64] FLAG: --container-log-max-size="10Mi"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513628 5024 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513637 5024 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513645 5024 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513652 5024 flags.go:64] FLAG: --contention-profiling="false"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513660 5024 flags.go:64] FLAG: --cpu-cfs-quota="true"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513668 5024 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513676 5024 flags.go:64] FLAG: --cpu-manager-policy="none"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513685 5024 flags.go:64] FLAG: --cpu-manager-policy-options=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513695 5024 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513703 5024 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513710 5024 flags.go:64] FLAG: --enable-debugging-handlers="true"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513718 5024 flags.go:64] FLAG: --enable-load-reader="false"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513726 5024 flags.go:64] FLAG: --enable-server="true"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.513733 5024 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515121 5024 flags.go:64] FLAG: --event-burst="100"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515156 5024 flags.go:64] FLAG: --event-qps="50"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515166 5024 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515174 5024 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515182 5024 flags.go:64] FLAG: --eviction-hard=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515192 5024 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515200 5024 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515208 5024 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515216 5024 flags.go:64] FLAG: --eviction-soft=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515225 5024 flags.go:64] FLAG: --eviction-soft-grace-period=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515233 5024 flags.go:64] FLAG: --exit-on-lock-contention="false"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515242 5024 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515250 5024 flags.go:64] FLAG: --experimental-mounter-path=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515257 5024 flags.go:64] FLAG: --fail-cgroupv1="false"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515265 5024 flags.go:64] FLAG: --fail-swap-on="true"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515273 5024 flags.go:64] FLAG: --feature-gates=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515282 5024 flags.go:64] FLAG: --file-check-frequency="20s"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515290 5024 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515299 5024 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515308 5024 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515316 5024 flags.go:64] FLAG: --healthz-port="10248"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515327 5024 flags.go:64] FLAG: --help="false"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515335 5024 flags.go:64] FLAG: --hostname-override=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515343 5024 flags.go:64] FLAG: --housekeeping-interval="10s"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515352 5024 flags.go:64] FLAG: --http-check-frequency="20s"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515361 5024 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515368 5024 flags.go:64] FLAG: --image-credential-provider-config=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515376 5024 flags.go:64] FLAG: --image-gc-high-threshold="85"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515384 5024 flags.go:64] FLAG: --image-gc-low-threshold="80"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515394 5024 flags.go:64] FLAG: --image-service-endpoint=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515402 5024 flags.go:64] FLAG: --kernel-memcg-notification="false"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515410 5024 flags.go:64] FLAG: --kube-api-burst="100"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515418 5024 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515427 5024 flags.go:64] FLAG: --kube-api-qps="50"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515434 5024 flags.go:64] FLAG: --kube-reserved=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515442 5024 flags.go:64] FLAG: --kube-reserved-cgroup=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515450 5024 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515459 5024 flags.go:64] FLAG: --kubelet-cgroups=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515467 5024 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515475 5024 flags.go:64] FLAG: --lock-file=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515484 5024 flags.go:64] FLAG: --log-cadvisor-usage="false"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515492 5024 flags.go:64] FLAG: --log-flush-frequency="5s"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515500 5024 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515513 5024 flags.go:64] FLAG: --log-json-split-stream="false"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515521 5024 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515529 5024 flags.go:64] FLAG: --log-text-split-stream="false"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515536 5024 flags.go:64] FLAG: --logging-format="text"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515544 5024 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515553 5024 flags.go:64] FLAG: --make-iptables-util-chains="true"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515561 5024 flags.go:64] FLAG: --manifest-url=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515568 5024 flags.go:64] FLAG: --manifest-url-header=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515579 5024 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515587 5024 flags.go:64] FLAG: --max-open-files="1000000"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515597 5024 flags.go:64] FLAG: --max-pods="110"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515604 5024 flags.go:64] FLAG: --maximum-dead-containers="-1"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515612 5024 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515620 5024 flags.go:64] FLAG: --memory-manager-policy="None"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515628 5024 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515636 5024 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515644 5024 flags.go:64] FLAG: --node-ip="192.168.126.11"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515652 5024 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515675 5024 flags.go:64] FLAG: --node-status-max-images="50"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515683 5024 flags.go:64] FLAG: --node-status-update-frequency="10s"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515691 5024 flags.go:64] FLAG: --oom-score-adj="-999"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515698 5024 flags.go:64] FLAG: --pod-cidr=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515707 5024 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515722 5024 flags.go:64] FLAG: --pod-manifest-path=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515729 5024 flags.go:64] FLAG: --pod-max-pids="-1"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515738 5024 flags.go:64] FLAG: --pods-per-core="0"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515746 5024 flags.go:64] FLAG: --port="10250"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515754 5024 flags.go:64] FLAG: --protect-kernel-defaults="false"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515762 5024 flags.go:64] FLAG: --provider-id=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515770 5024 flags.go:64] FLAG: --qos-reserved=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515778 5024 flags.go:64] FLAG: --read-only-port="10255"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515786 5024 flags.go:64] FLAG: --register-node="true"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515793 5024 flags.go:64] FLAG: --register-schedulable="true"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515801 5024 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515825 5024 flags.go:64] FLAG: --registry-burst="10"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515832 5024 flags.go:64] FLAG: --registry-qps="5"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515840 5024 flags.go:64] FLAG: --reserved-cpus=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515848 5024 flags.go:64] FLAG: --reserved-memory=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515858 5024 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515866 5024 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515874 5024 flags.go:64] FLAG: --rotate-certificates="false"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515882 5024 flags.go:64] FLAG: --rotate-server-certificates="false"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515889 5024 flags.go:64] FLAG: --runonce="false"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515897 5024 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515905 5024 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515913 5024 flags.go:64] FLAG: --seccomp-default="false"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515921 5024 flags.go:64] FLAG: --serialize-image-pulls="true"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515928 5024 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515936 5024 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515944 5024 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515952 5024 flags.go:64] FLAG: --storage-driver-password="root"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515960 5024 flags.go:64] FLAG: --storage-driver-secure="false"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515967 5024 flags.go:64] FLAG: --storage-driver-table="stats"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515975 5024 flags.go:64] FLAG: --storage-driver-user="root"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515983 5024 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515991 5024 flags.go:64] FLAG: --sync-frequency="1m0s"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.515999 5024 flags.go:64] FLAG: --system-cgroups=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.516008 5024 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.516023 5024 flags.go:64] FLAG: --system-reserved-cgroup=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.516031 5024 flags.go:64] FLAG: --tls-cert-file=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.516039 5024 flags.go:64] FLAG: --tls-cipher-suites="[]"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.516048 5024 flags.go:64] FLAG: --tls-min-version=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.516056 5024 flags.go:64] FLAG: --tls-private-key-file=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.516063 5024 flags.go:64] FLAG: --topology-manager-policy="none"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.516071 5024 flags.go:64] FLAG: --topology-manager-policy-options=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.516078 5024 flags.go:64] FLAG: --topology-manager-scope="container"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.516086 5024 flags.go:64] FLAG: --v="2"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.516097 5024 flags.go:64] FLAG: --version="false"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.516108 5024 flags.go:64] FLAG: --vmodule=""
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.516117 5024 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.516125 5024 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516538 5024 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516549 5024 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516555 5024 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516561 5024 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516566 5024 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516573 5024 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516579 5024 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516595 5024 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516601 5024 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516608 5024 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516615 5024 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516622 5024 feature_gate.go:330] unrecognized feature gate: Example
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516629 5024 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516635 5024 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516641 5024 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516646 5024 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516653 5024 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516659 5024 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516664 5024 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516672 5024 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516680 5024 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516687 5024 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516694 5024 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516702 5024 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516712 5024 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516720 5024 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516728 5024 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516734 5024 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516740 5024 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516746 5024 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516751 5024 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516756 5024 feature_gate.go:330] unrecognized feature gate: OVNObservability
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516761 5024 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516768 5024 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516777 5024 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516785 5024 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516792 5024 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516798 5024 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516805 5024 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516812 5024 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516819 5024 feature_gate.go:330] unrecognized feature gate: NewOLM
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516825 5024 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516831 5024 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516838 5024 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516844 5024 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516849 5024 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516855 5024 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516862 5024 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516869 5024 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516875 5024 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516882 5024 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516890 5024 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516896 5024 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516903 5024 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516909 5024 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516915 5024 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516922 5024 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516928 5024 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516935 5024 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516943 5024 feature_gate.go:330] unrecognized feature gate: PinnedImages
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516950 5024 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516956 5024 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516963 5024 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516969 5024 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516979 5024 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516986 5024 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.516995 5024 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.517004 5024 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.517012 5024 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.517019 5024 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.517025 5024 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.517951 5024 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.533627 5024 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.533731 5024 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.533895 5024 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.533921 5024 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.533931 5024 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.533942 5024 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.533951 5024 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.533960 5024 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.533975 5024 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.533991 5024 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534014 5024 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534027 5024 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534038 5024 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534049 5024 feature_gate.go:330] unrecognized feature gate: NewOLM
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534060 5024 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534073 5024 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534085 5024 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534095 5024 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534104 5024 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534117 5024 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534173 5024 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534184 5024 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534194 5024 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534202 5024 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534211 5024 feature_gate.go:330] unrecognized feature gate: PinnedImages
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534221 5024 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534232 5024 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534240 5024 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534248 5024 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534256 5024 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534263 5024 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534271 5024 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534279 5024 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534288 5024 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534295 5024 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534303 5024 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534314 5024 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534322 5024 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534330 5024 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534338 5024 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 
12:27:42.534346 5024 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534354 5024 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534362 5024 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534370 5024 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534379 5024 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534388 5024 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534397 5024 feature_gate.go:330] unrecognized feature gate: Example Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534406 5024 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534414 5024 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534423 5024 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534430 5024 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534439 5024 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534447 5024 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534455 5024 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534462 5024 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 07 
12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534470 5024 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534478 5024 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534486 5024 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534494 5024 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534501 5024 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534509 5024 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534518 5024 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534526 5024 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534534 5024 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534544 5024 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534555 5024 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534564 5024 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534573 5024 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534582 5024 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534590 5024 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534600 5024 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534610 5024 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534633 5024 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.534648 5024 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534883 5024 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534898 5024 feature_gate.go:353] Setting GA feature gate 
DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534910 5024 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534920 5024 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534929 5024 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534937 5024 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534945 5024 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534953 5024 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534961 5024 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534970 5024 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534978 5024 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534985 5024 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.534992 5024 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535000 5024 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535008 5024 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535015 5024 feature_gate.go:330] 
unrecognized feature gate: ClusterAPIInstall Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535023 5024 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535030 5024 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535038 5024 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535047 5024 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535056 5024 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535064 5024 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535072 5024 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535079 5024 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535088 5024 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535097 5024 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535106 5024 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535113 5024 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535121 5024 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535128 5024 feature_gate.go:330] unrecognized feature gate: 
AdminNetworkPolicy Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535170 5024 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535177 5024 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535185 5024 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535193 5024 feature_gate.go:330] unrecognized feature gate: Example Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535202 5024 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535210 5024 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535218 5024 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535225 5024 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535233 5024 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535240 5024 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535248 5024 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535256 5024 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535263 5024 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535271 5024 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 
12:27:42.535278 5024 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535286 5024 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535294 5024 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535301 5024 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535309 5024 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535316 5024 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535326 5024 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535336 5024 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535345 5024 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535353 5024 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535362 5024 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535371 5024 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535380 5024 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535388 5024 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535397 5024 feature_gate.go:330] unrecognized 
feature gate: ExternalOIDC Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535407 5024 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535422 5024 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535442 5024 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535453 5024 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535468 5024 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535479 5024 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535532 5024 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535544 5024 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535554 5024 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535567 5024 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535578 5024 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.535591 5024 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.535605 5024 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false 
ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.535954 5024 server.go:940] "Client rotation is on, will bootstrap in background" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.543517 5024 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.543719 5024 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.546277 5024 server.go:997] "Starting client certificate rotation" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.546331 5024 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.546548 5024 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-21 04:09:29.398359709 +0000 UTC Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.546648 5024 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 1791h41m46.851715068s for next certificate rotation Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.576551 5024 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.579410 5024 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.596217 5024 log.go:25] "Validated CRI v1 runtime API" Oct 07 12:27:42 crc 
kubenswrapper[5024]: I1007 12:27:42.631242 5024 log.go:25] "Validated CRI v1 image API" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.632925 5024 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.637647 5024 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-10-07-12-23-06-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.637689 5024 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.653268 5024 manager.go:217] Machine: {Timestamp:2025-10-07 12:27:42.650604623 +0000 UTC m=+0.726391481 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:b2d72a02-4b40-4530-9891-327ad0d24531 BootID:6f479552-acfd-496b-8406-45ea4b4aa6ef Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 
Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:ea:d7:b5 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:ea:d7:b5 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:f8:ea:96 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:89:7c:39 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:24:2f:e7 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:2e:47:ba Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:4d:3e:6e Speed:-1 Mtu:1496} {Name:eth10 MacAddress:0e:65:1c:37:c8:59 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:22:67:08:01:bd:50 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 
Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} 
{Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.653547 5024 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.653704 5024 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.655132 5024 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.655474 5024 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.655516 5024 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.656464 5024 topology_manager.go:138] "Creating topology manager with none policy" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.656488 5024 container_manager_linux.go:303] "Creating device plugin manager" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.657092 5024 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.657119 5024 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.657330 5024 state_mem.go:36] "Initialized new in-memory state store" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.657455 5024 server.go:1245] "Using root directory" path="/var/lib/kubelet" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.661347 5024 kubelet.go:418] "Attempting to sync node with API server" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.661373 5024 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.661444 5024 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.661460 5024 kubelet.go:324] "Adding apiserver pod source" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.661476 5024 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.667445 5024 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.668998 5024 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.671178 5024 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.676162 5024 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.676219 5024 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.676228 5024 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.676235 5024 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.676248 5024 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.676257 5024 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.676267 5024 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.676286 5024 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.676296 5024 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.676306 5024 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.676327 5024 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.676335 5024 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.676742 5024 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Oct 07 12:27:42 crc kubenswrapper[5024]: E1007 12:27:42.676936 5024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.676754 5024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Oct 07 12:27:42 crc kubenswrapper[5024]: E1007 12:27:42.677680 5024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.679075 5024 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.681806 5024 server.go:1280] "Started kubelet" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.683461 5024 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.683825 5024 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.683926 
5024 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Oct 07 12:27:42 crc systemd[1]: Started Kubernetes Kubelet. Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.684559 5024 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.696184 5024 server.go:460] "Adding debug handlers to kubelet server" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.696278 5024 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.696310 5024 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.696379 5024 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 17:18:48.787968406 +0000 UTC Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.696579 5024 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 844h51m6.091394478s for next certificate rotation Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.696625 5024 volume_manager.go:287] "The desired_state_of_world populator starts" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.696646 5024 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 07 12:27:42 crc kubenswrapper[5024]: E1007 12:27:42.696624 5024 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.696713 5024 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.697353 5024 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Oct 07 12:27:42 crc kubenswrapper[5024]: E1007 12:27:42.697426 5024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" Oct 07 12:27:42 crc kubenswrapper[5024]: E1007 12:27:42.699108 5024 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="200ms" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.700631 5024 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.700657 5024 factory.go:55] Registering systemd factory Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.700666 5024 factory.go:221] Registration of the systemd container factory successfully Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.701070 5024 factory.go:153] Registering CRI-O factory Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.701097 5024 factory.go:221] Registration of the crio container factory successfully Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.701123 5024 factory.go:103] Registering Raw factory Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.701158 5024 manager.go:1196] Started watching for new ooms in 
manager Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.701831 5024 manager.go:319] Starting recovery of all containers Oct 07 12:27:42 crc kubenswrapper[5024]: E1007 12:27:42.701964 5024 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.130:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186c353685a5acef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-07 12:27:42.681754863 +0000 UTC m=+0.757541701,LastTimestamp:2025-10-07 12:27:42.681754863 +0000 UTC m=+0.757541701,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704370 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704420 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704435 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 
12:27:42.704446 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704460 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704471 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704483 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704495 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704510 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704523 5024 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704531 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704541 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704550 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704564 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704574 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704604 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704617 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704629 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704641 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704652 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704662 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704672 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704681 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704696 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704707 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704717 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704730 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704739 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704751 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704760 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704769 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704778 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704787 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704799 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704809 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.704819 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706246 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706360 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706389 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706414 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706429 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706441 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706462 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706475 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706494 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706508 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706522 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706539 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706553 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706572 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706584 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706598 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706621 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706644 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706665 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706682 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706701 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706715 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" 
seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706734 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706748 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706761 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706782 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706794 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706811 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.706824 
5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.712341 5024 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.713983 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714006 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714027 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714040 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Oct 07 
12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714056 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714075 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714088 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714104 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714119 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714131 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714160 5024 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714177 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714193 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714207 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714225 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714246 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714261 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714282 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714294 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714307 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714325 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714338 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714352 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714369 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714380 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714397 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714409 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714421 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714436 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714447 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714463 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714477 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714493 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714509 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714522 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" 
seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714539 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.714618 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716236 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716309 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716344 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716363 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Oct 07 12:27:42 crc 
kubenswrapper[5024]: I1007 12:27:42.716380 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716393 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716409 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716425 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716442 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716459 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716478 5024 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716490 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716504 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716517 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716531 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716546 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716560 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716573 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716589 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716603 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716619 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716634 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716650 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" 
seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716664 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716679 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716693 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716707 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716720 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716734 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716750 
5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716763 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716777 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716791 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716806 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716820 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716836 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716850 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716864 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716878 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716891 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716904 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716919 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716936 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716949 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716964 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716978 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.716991 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717004 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717016 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717030 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717043 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717056 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717071 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717085 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717102 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717116 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717132 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717164 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717178 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717193 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717208 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717224 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717238 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717252 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717267 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717283 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717297 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717310 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717325 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717340 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717356 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717370 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717385 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717401 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717416 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717436 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717450 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717466 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" 
seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717505 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717519 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717534 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717547 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717561 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717574 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 
12:27:42.717588 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717603 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717617 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717633 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717650 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717665 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717679 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717695 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717711 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717728 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717743 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717759 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717772 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717787 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717802 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717816 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717830 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717844 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717861 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717877 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717892 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717905 5024 reconstruct.go:97] "Volume reconstruction finished" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.717916 5024 reconciler.go:26] "Reconciler: start to sync state" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.731738 5024 manager.go:324] Recovery completed Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.747064 5024 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.749997 5024 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.750122 5024 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.750236 5024 kubelet.go:2335] "Starting kubelet main sync loop" Oct 07 12:27:42 crc kubenswrapper[5024]: E1007 12:27:42.750423 5024 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 07 12:27:42 crc kubenswrapper[5024]: W1007 12:27:42.751388 5024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Oct 07 12:27:42 crc kubenswrapper[5024]: E1007 12:27:42.751443 5024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.759091 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.760777 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.760820 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.760831 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.761760 5024 
cpu_manager.go:225] "Starting CPU manager" policy="none" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.761782 5024 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.761805 5024 state_mem.go:36] "Initialized new in-memory state store" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.781361 5024 policy_none.go:49] "None policy: Start" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.783055 5024 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.783372 5024 state_mem.go:35] "Initializing new in-memory state store" Oct 07 12:27:42 crc kubenswrapper[5024]: E1007 12:27:42.797789 5024 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.836602 5024 manager.go:334] "Starting Device Plugin manager" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.836676 5024 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.836690 5024 server.go:79] "Starting device plugin registration server" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.837224 5024 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.837241 5024 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.837571 5024 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.837634 5024 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.837641 5024 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Oct 07 12:27:42 crc kubenswrapper[5024]: E1007 12:27:42.847254 5024 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.851494 5024 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.851592 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.853384 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.853417 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.853428 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.853584 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.853789 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.853838 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.854445 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.854477 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.854490 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.854689 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.855098 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.855151 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.855593 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.855625 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.855638 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.855673 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.855687 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.855695 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.855778 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.856079 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.856215 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.856257 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.856304 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.856320 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.856622 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.856642 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.856650 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.856743 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.856866 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.856895 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.857363 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.857391 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.857402 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.857602 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.857733 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.858273 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.858338 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.858395 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.858420 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.858400 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.858442 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.858717 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.858738 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.858747 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:42 crc kubenswrapper[5024]: E1007 12:27:42.900922 5024 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="400ms" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.920085 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.920166 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.920197 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.920221 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.920254 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.920269 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.920308 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.920374 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.920405 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.920431 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.920482 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.920519 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.920561 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.920652 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.920715 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.938412 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.940047 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.940089 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.940102 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:42 crc kubenswrapper[5024]: I1007 12:27:42.940151 5024 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 07 12:27:42 crc kubenswrapper[5024]: E1007 12:27:42.940643 5024 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.130:6443: connect: connection refused" node="crc" Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022270 5024 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022323 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022365 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022387 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022403 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022421 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022438 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022439 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022499 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022507 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022452 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022457 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022576 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022623 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022644 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022661 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022680 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022737 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022755 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022796 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022829 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022865 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022877 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022895 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022901 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022894 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022975 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022933 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022919 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.022958 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.141272 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.143728 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.143811 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.143830 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.143880 5024 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: E1007 12:27:43.144855 5024 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.130:6443: connect: connection refused" node="crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.179651 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.185824 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.207603 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.222924 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.230823 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: W1007 12:27:43.231213 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-85cb5dc6f9fc54e5e258472af6908220a1ec8964f510c762e06c4cb0c0e3bd64 WatchSource:0}: Error finding container 85cb5dc6f9fc54e5e258472af6908220a1ec8964f510c762e06c4cb0c0e3bd64: Status 404 returned error can't find the container with id 85cb5dc6f9fc54e5e258472af6908220a1ec8964f510c762e06c4cb0c0e3bd64
Oct 07 12:27:43 crc kubenswrapper[5024]: W1007 12:27:43.234218 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-639842d8ccda0d7ffa2976ac53e3bf5ab5b4a1d6d246f654da79400206041f19 WatchSource:0}: Error finding container 639842d8ccda0d7ffa2976ac53e3bf5ab5b4a1d6d246f654da79400206041f19: Status 404 returned error can't find the container with id 639842d8ccda0d7ffa2976ac53e3bf5ab5b4a1d6d246f654da79400206041f19
Oct 07 12:27:43 crc kubenswrapper[5024]: W1007 12:27:43.240113 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-c5f9d096a0a6736e162644a73639df44d8288d7a1f6ceaaf47cc6f8d2b21ff11 WatchSource:0}: Error finding container c5f9d096a0a6736e162644a73639df44d8288d7a1f6ceaaf47cc6f8d2b21ff11: Status 404 returned error can't find the container with id c5f9d096a0a6736e162644a73639df44d8288d7a1f6ceaaf47cc6f8d2b21ff11
Oct 07 12:27:43 crc kubenswrapper[5024]: W1007 12:27:43.250972 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-1b51cfd21f9de70b2ee3daa3913eea25b32ab6c1129404c6b6b5f124bb481e8a WatchSource:0}: Error finding container 1b51cfd21f9de70b2ee3daa3913eea25b32ab6c1129404c6b6b5f124bb481e8a: Status 404 returned error can't find the container with id 1b51cfd21f9de70b2ee3daa3913eea25b32ab6c1129404c6b6b5f124bb481e8a
Oct 07 12:27:43 crc kubenswrapper[5024]: W1007 12:27:43.258670 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-4623891ff1c774a75aeff6d3a49f628e49f7fc1d89b57a8fdb77ccd044ac1569 WatchSource:0}: Error finding container 4623891ff1c774a75aeff6d3a49f628e49f7fc1d89b57a8fdb77ccd044ac1569: Status 404 returned error can't find the container with id 4623891ff1c774a75aeff6d3a49f628e49f7fc1d89b57a8fdb77ccd044ac1569
Oct 07 12:27:43 crc kubenswrapper[5024]: E1007 12:27:43.301480 5024 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="800ms"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.545709 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.547376 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.547446 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.547464 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.547507 5024 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: E1007 12:27:43.548210 5024 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.130:6443: connect: connection refused" node="crc"
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.695521 5024 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.760483 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"639842d8ccda0d7ffa2976ac53e3bf5ab5b4a1d6d246f654da79400206041f19"}
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.761467 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4623891ff1c774a75aeff6d3a49f628e49f7fc1d89b57a8fdb77ccd044ac1569"}
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.762287 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1b51cfd21f9de70b2ee3daa3913eea25b32ab6c1129404c6b6b5f124bb481e8a"}
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.763040 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c5f9d096a0a6736e162644a73639df44d8288d7a1f6ceaaf47cc6f8d2b21ff11"}
Oct 07 12:27:43 crc kubenswrapper[5024]: I1007 12:27:43.763895 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"85cb5dc6f9fc54e5e258472af6908220a1ec8964f510c762e06c4cb0c0e3bd64"}
Oct 07 12:27:43 crc kubenswrapper[5024]: W1007 12:27:43.818184 5024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused
Oct 07 12:27:43 crc kubenswrapper[5024]: E1007 12:27:43.818282 5024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError"
Oct 07 12:27:44 crc kubenswrapper[5024]: W1007 12:27:44.034626 5024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused
Oct 07 12:27:44 crc kubenswrapper[5024]: E1007 12:27:44.035025 5024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError"
Oct 07 12:27:44 crc kubenswrapper[5024]: E1007 12:27:44.102944 5024 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="1.6s"
Oct 07 12:27:44 crc kubenswrapper[5024]: W1007 12:27:44.107585 5024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused
Oct 07 12:27:44 crc kubenswrapper[5024]: E1007 12:27:44.107684 5024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError"
Oct 07 12:27:44 crc kubenswrapper[5024]: W1007 12:27:44.271537 5024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused
Oct 07 12:27:44 crc kubenswrapper[5024]: E1007 12:27:44.271616 5024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.348516 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.354399 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.354434 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.354442 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.354466 5024 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Oct 07 12:27:44 crc kubenswrapper[5024]: E1007 12:27:44.354945 5024 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.130:6443: connect: connection refused" node="crc"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.695743 5024 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.769315 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4"}
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.769372 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4"}
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.769387 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0"}
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.769397 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6"}
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.769492 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.770599 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.770624 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.770631 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.771652 5024 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="c4c653430bd4fd3560d45ba7f326ef0ebe1f3c64184c9cd6518417506eb6fd37" exitCode=0
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.771717 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"c4c653430bd4fd3560d45ba7f326ef0ebe1f3c64184c9cd6518417506eb6fd37"}
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.771744 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.772620 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.772647 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.772658 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.773948 5024 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a" exitCode=0
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.774007 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a"}
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.774109 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.775001 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.775033 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.775046 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.775633 5024 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d" exitCode=0
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.775692 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d"}
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.775734 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.776478 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.776508 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.776519 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.777120 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.778696 5024 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="6f4d0527470b130f88e3c0e84d67de9853deaee1e26d000187ff7328edd4b3d4" exitCode=0
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.778820 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"6f4d0527470b130f88e3c0e84d67de9853deaee1e26d000187ff7328edd4b3d4"}
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.778942 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.781300 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.781397 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.781429 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.781863 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.781952 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 07 12:27:44 crc kubenswrapper[5024]: I1007 12:27:44.781977 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.695769 5024 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused
Oct 07 12:27:45 crc kubenswrapper[5024]: E1007 12:27:45.704432 5024 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="3.2s"
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.711510 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.784701 5024 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c" exitCode=0
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.784780 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c"}
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.784970 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.786579 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.786612 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.786624 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.788111 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.788115 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"670135e274ef583cd074f1d5f07b59626278cba64f32273fe44b3ee8e8767918"}
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.788986 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.789056 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.789066 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.791290 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ad05394fbaf6a0ec1639fbf66246b31263dd324edbbec4fe392ddd20d0ef1a5f"}
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.791357 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ca59ea1444a78e29f74f86b381c54dd111bb12f84d9f7f5eb1d528f605af81c0"}
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.791373 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4b28eef295f695c0825d7dc0fa49ab5bf0b555ab62e94dd749dae2a20a5026c9"}
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.791415 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.793856 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.793915 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.793929 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.809029 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb"}
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.809094 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277"}
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.809111 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06"}
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.809122 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633"}
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.809110 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.811176 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.811215 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.811225 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.955578 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.956976 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.957007 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.957018 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 07 12:27:45 crc kubenswrapper[5024]: I1007 12:27:45.957040 5024 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Oct 07 12:27:45 crc kubenswrapper[5024]: E1007 12:27:45.957375 5024 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.130:6443: connect: connection refused" node="crc"
Oct 07 12:27:46 crc kubenswrapper[5024]: W1007 12:27:46.011100 5024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused
Oct 07 12:27:46 crc kubenswrapper[5024]: E1007 12:27:46.011197 5024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError"
Oct 07 12:27:46 crc kubenswrapper[5024]: I1007 12:27:46.814792 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070"}
Oct 07 12:27:46 crc kubenswrapper[5024]: I1007 12:27:46.814976 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 07 12:27:46 crc kubenswrapper[5024]: I1007 12:27:46.816388 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 07 12:27:46 crc kubenswrapper[5024]: I1007 12:27:46.816439 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 07 12:27:46 crc kubenswrapper[5024]: I1007 12:27:46.816461 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 07 12:27:46 crc kubenswrapper[5024]: I1007 12:27:46.817047 5024 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413" exitCode=0
Oct 07 12:27:46 crc kubenswrapper[5024]: I1007 12:27:46.817160 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 07 12:27:46 crc kubenswrapper[5024]: I1007 12:27:46.817184 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 07 12:27:46 crc kubenswrapper[5024]: I1007 12:27:46.817247 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413"}
Oct 07 12:27:46 crc kubenswrapper[5024]: I1007 12:27:46.817292 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 07 12:27:46 crc kubenswrapper[5024]: I1007
12:27:46.817255 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:46 crc kubenswrapper[5024]: I1007 12:27:46.817290 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:46 crc kubenswrapper[5024]: I1007 12:27:46.817987 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:46 crc kubenswrapper[5024]: I1007 12:27:46.818014 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:46 crc kubenswrapper[5024]: I1007 12:27:46.818026 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:46 crc kubenswrapper[5024]: I1007 12:27:46.818649 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:46 crc kubenswrapper[5024]: I1007 12:27:46.818667 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:46 crc kubenswrapper[5024]: I1007 12:27:46.818699 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:46 crc kubenswrapper[5024]: I1007 12:27:46.818721 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:46 crc kubenswrapper[5024]: I1007 12:27:46.818788 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:46 crc kubenswrapper[5024]: I1007 12:27:46.818699 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:46 crc kubenswrapper[5024]: I1007 12:27:46.818808 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 07 12:27:46 crc kubenswrapper[5024]: I1007 12:27:46.818843 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:46 crc kubenswrapper[5024]: I1007 12:27:46.818854 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:46 crc kubenswrapper[5024]: I1007 12:27:46.938056 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 07 12:27:47 crc kubenswrapper[5024]: I1007 12:27:47.682549 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 07 12:27:47 crc kubenswrapper[5024]: I1007 12:27:47.825165 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3"} Oct 07 12:27:47 crc kubenswrapper[5024]: I1007 12:27:47.825227 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee"} Oct 07 12:27:47 crc kubenswrapper[5024]: I1007 12:27:47.825241 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84"} Oct 07 12:27:47 crc kubenswrapper[5024]: I1007 12:27:47.825251 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e"} Oct 07 12:27:47 crc kubenswrapper[5024]: I1007 
12:27:47.825253 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:47 crc kubenswrapper[5024]: I1007 12:27:47.825253 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:47 crc kubenswrapper[5024]: I1007 12:27:47.825422 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:47 crc kubenswrapper[5024]: I1007 12:27:47.825447 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:47 crc kubenswrapper[5024]: I1007 12:27:47.825259 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70"} Oct 07 12:27:47 crc kubenswrapper[5024]: I1007 12:27:47.825663 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 07 12:27:47 crc kubenswrapper[5024]: I1007 12:27:47.826755 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:47 crc kubenswrapper[5024]: I1007 12:27:47.826813 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:47 crc kubenswrapper[5024]: I1007 12:27:47.826836 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:47 crc kubenswrapper[5024]: I1007 12:27:47.827098 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:47 crc kubenswrapper[5024]: I1007 12:27:47.827222 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:47 crc kubenswrapper[5024]: I1007 
12:27:47.827251 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:47 crc kubenswrapper[5024]: I1007 12:27:47.827471 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:47 crc kubenswrapper[5024]: I1007 12:27:47.827499 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:47 crc kubenswrapper[5024]: I1007 12:27:47.827509 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:47 crc kubenswrapper[5024]: I1007 12:27:47.827519 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:47 crc kubenswrapper[5024]: I1007 12:27:47.827536 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:47 crc kubenswrapper[5024]: I1007 12:27:47.827547 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:48 crc kubenswrapper[5024]: I1007 12:27:48.827642 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:48 crc kubenswrapper[5024]: I1007 12:27:48.827654 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:48 crc kubenswrapper[5024]: I1007 12:27:48.828744 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:48 crc kubenswrapper[5024]: I1007 12:27:48.828809 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:48 crc kubenswrapper[5024]: I1007 12:27:48.828825 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Oct 07 12:27:48 crc kubenswrapper[5024]: I1007 12:27:48.829968 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:48 crc kubenswrapper[5024]: I1007 12:27:48.830010 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:48 crc kubenswrapper[5024]: I1007 12:27:48.830023 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:49 crc kubenswrapper[5024]: I1007 12:27:49.157798 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:49 crc kubenswrapper[5024]: I1007 12:27:49.159674 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:49 crc kubenswrapper[5024]: I1007 12:27:49.159729 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:49 crc kubenswrapper[5024]: I1007 12:27:49.159745 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:49 crc kubenswrapper[5024]: I1007 12:27:49.159780 5024 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 07 12:27:49 crc kubenswrapper[5024]: I1007 12:27:49.938565 5024 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 07 12:27:49 crc kubenswrapper[5024]: I1007 12:27:49.938632 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 07 12:27:50 crc kubenswrapper[5024]: I1007 12:27:50.719387 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 07 12:27:50 crc kubenswrapper[5024]: I1007 12:27:50.719653 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:50 crc kubenswrapper[5024]: I1007 12:27:50.720973 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:50 crc kubenswrapper[5024]: I1007 12:27:50.721007 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:50 crc kubenswrapper[5024]: I1007 12:27:50.721018 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:51 crc kubenswrapper[5024]: I1007 12:27:51.538804 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Oct 07 12:27:51 crc kubenswrapper[5024]: I1007 12:27:51.539005 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:51 crc kubenswrapper[5024]: I1007 12:27:51.540301 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:51 crc kubenswrapper[5024]: I1007 12:27:51.540347 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:51 crc kubenswrapper[5024]: I1007 12:27:51.540356 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:52 crc kubenswrapper[5024]: I1007 12:27:52.607901 5024 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 07 12:27:52 crc kubenswrapper[5024]: I1007 12:27:52.608069 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:52 crc kubenswrapper[5024]: I1007 12:27:52.609046 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:52 crc kubenswrapper[5024]: I1007 12:27:52.609079 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:52 crc kubenswrapper[5024]: I1007 12:27:52.609093 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:52 crc kubenswrapper[5024]: E1007 12:27:52.848116 5024 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 07 12:27:54 crc kubenswrapper[5024]: I1007 12:27:54.236433 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Oct 07 12:27:54 crc kubenswrapper[5024]: I1007 12:27:54.236591 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:54 crc kubenswrapper[5024]: I1007 12:27:54.237589 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:54 crc kubenswrapper[5024]: I1007 12:27:54.237636 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:54 crc kubenswrapper[5024]: I1007 12:27:54.237651 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:54 crc kubenswrapper[5024]: I1007 12:27:54.501351 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 07 12:27:54 crc kubenswrapper[5024]: I1007 12:27:54.501554 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:54 crc kubenswrapper[5024]: I1007 12:27:54.503203 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:54 crc kubenswrapper[5024]: I1007 12:27:54.503273 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:54 crc kubenswrapper[5024]: I1007 12:27:54.503298 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:54 crc kubenswrapper[5024]: I1007 12:27:54.507292 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 07 12:27:54 crc kubenswrapper[5024]: I1007 12:27:54.847385 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:54 crc kubenswrapper[5024]: I1007 12:27:54.848493 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:54 crc kubenswrapper[5024]: I1007 12:27:54.848539 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:54 crc kubenswrapper[5024]: I1007 12:27:54.848556 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:54 crc kubenswrapper[5024]: I1007 12:27:54.852015 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 07 12:27:55 crc kubenswrapper[5024]: I1007 12:27:55.852194 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Oct 07 12:27:55 crc kubenswrapper[5024]: I1007 12:27:55.857480 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:55 crc kubenswrapper[5024]: I1007 12:27:55.857538 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:55 crc kubenswrapper[5024]: I1007 12:27:55.857551 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:56 crc kubenswrapper[5024]: W1007 12:27:56.517361 5024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Oct 07 12:27:56 crc kubenswrapper[5024]: I1007 12:27:56.517484 5024 trace.go:236] Trace[1632224161]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (07-Oct-2025 12:27:46.516) (total time: 10001ms): Oct 07 12:27:56 crc kubenswrapper[5024]: Trace[1632224161]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:27:56.517) Oct 07 12:27:56 crc kubenswrapper[5024]: Trace[1632224161]: [10.001282109s] [10.001282109s] END Oct 07 12:27:56 crc kubenswrapper[5024]: E1007 12:27:56.517518 5024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Oct 07 12:27:56 crc kubenswrapper[5024]: W1007 12:27:56.564053 5024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Oct 07 12:27:56 crc kubenswrapper[5024]: I1007 12:27:56.564234 5024 trace.go:236] Trace[1575540439]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (07-Oct-2025 12:27:46.562) (total time: 10001ms): Oct 07 12:27:56 crc kubenswrapper[5024]: Trace[1575540439]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:27:56.564) Oct 07 12:27:56 crc kubenswrapper[5024]: Trace[1575540439]: [10.001288932s] [10.001288932s] END Oct 07 12:27:56 crc kubenswrapper[5024]: E1007 12:27:56.564271 5024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Oct 07 12:27:56 crc kubenswrapper[5024]: I1007 12:27:56.697229 5024 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Oct 07 12:27:56 crc kubenswrapper[5024]: W1007 12:27:56.830353 5024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Oct 07 12:27:56 crc kubenswrapper[5024]: I1007 12:27:56.830656 5024 trace.go:236] Trace[1699857828]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (07-Oct-2025 12:27:46.828) (total time: 10001ms): Oct 07 12:27:56 crc kubenswrapper[5024]: Trace[1699857828]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:27:56.830) Oct 07 12:27:56 crc kubenswrapper[5024]: Trace[1699857828]: [10.001851602s] [10.001851602s] END Oct 07 12:27:56 crc kubenswrapper[5024]: E1007 12:27:56.830715 5024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Oct 07 12:27:56 crc kubenswrapper[5024]: I1007 12:27:56.855970 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 07 12:27:56 crc kubenswrapper[5024]: I1007 12:27:56.858241 5024 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070" exitCode=255 Oct 07 12:27:56 crc kubenswrapper[5024]: I1007 12:27:56.858293 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070"} Oct 07 12:27:56 crc kubenswrapper[5024]: I1007 12:27:56.858461 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:56 crc kubenswrapper[5024]: I1007 12:27:56.859268 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:56 crc kubenswrapper[5024]: I1007 12:27:56.859300 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:56 crc 
kubenswrapper[5024]: I1007 12:27:56.859310 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:56 crc kubenswrapper[5024]: I1007 12:27:56.859795 5024 scope.go:117] "RemoveContainer" containerID="a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070" Oct 07 12:27:56 crc kubenswrapper[5024]: E1007 12:27:56.998720 5024 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.186c353685a5acef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-07 12:27:42.681754863 +0000 UTC m=+0.757541701,LastTimestamp:2025-10-07 12:27:42.681754863 +0000 UTC m=+0.757541701,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Oct 07 12:27:57 crc kubenswrapper[5024]: I1007 12:27:57.557323 5024 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Oct 07 12:27:57 crc kubenswrapper[5024]: I1007 12:27:57.557385 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Oct 07 12:27:57 crc kubenswrapper[5024]: I1007 
12:27:57.564187 5024 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Oct 07 12:27:57 crc kubenswrapper[5024]: I1007 12:27:57.564276 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Oct 07 12:27:57 crc kubenswrapper[5024]: I1007 12:27:57.689478 5024 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 07 12:27:57 crc kubenswrapper[5024]: [+]log ok Oct 07 12:27:57 crc kubenswrapper[5024]: [+]etcd ok Oct 07 12:27:57 crc kubenswrapper[5024]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 07 12:27:57 crc kubenswrapper[5024]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 07 12:27:57 crc kubenswrapper[5024]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 07 12:27:57 crc kubenswrapper[5024]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 07 12:27:57 crc kubenswrapper[5024]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 07 12:27:57 crc kubenswrapper[5024]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 07 12:27:57 crc kubenswrapper[5024]: [+]poststarthook/generic-apiserver-start-informers ok Oct 07 12:27:57 
crc kubenswrapper[5024]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 07 12:27:57 crc kubenswrapper[5024]: [+]poststarthook/priority-and-fairness-filter ok Oct 07 12:27:57 crc kubenswrapper[5024]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 07 12:27:57 crc kubenswrapper[5024]: [+]poststarthook/start-apiextensions-informers ok Oct 07 12:27:57 crc kubenswrapper[5024]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Oct 07 12:27:57 crc kubenswrapper[5024]: [-]poststarthook/crd-informer-synced failed: reason withheld Oct 07 12:27:57 crc kubenswrapper[5024]: [+]poststarthook/start-system-namespaces-controller ok Oct 07 12:27:57 crc kubenswrapper[5024]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 07 12:27:57 crc kubenswrapper[5024]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 07 12:27:57 crc kubenswrapper[5024]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 07 12:27:57 crc kubenswrapper[5024]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 07 12:27:57 crc kubenswrapper[5024]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 07 12:27:57 crc kubenswrapper[5024]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Oct 07 12:27:57 crc kubenswrapper[5024]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Oct 07 12:27:57 crc kubenswrapper[5024]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 07 12:27:57 crc kubenswrapper[5024]: [+]poststarthook/bootstrap-controller ok Oct 07 12:27:57 crc kubenswrapper[5024]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 07 12:27:57 crc kubenswrapper[5024]: [+]poststarthook/start-kube-aggregator-informers ok Oct 07 12:27:57 crc kubenswrapper[5024]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 07 12:27:57 crc kubenswrapper[5024]: 
[+]poststarthook/apiservice-status-remote-available-controller ok Oct 07 12:27:57 crc kubenswrapper[5024]: [+]poststarthook/apiservice-registration-controller ok Oct 07 12:27:57 crc kubenswrapper[5024]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 07 12:27:57 crc kubenswrapper[5024]: [+]poststarthook/apiservice-discovery-controller ok Oct 07 12:27:57 crc kubenswrapper[5024]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 07 12:27:57 crc kubenswrapper[5024]: [+]autoregister-completion ok Oct 07 12:27:57 crc kubenswrapper[5024]: [+]poststarthook/apiservice-openapi-controller ok Oct 07 12:27:57 crc kubenswrapper[5024]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 07 12:27:57 crc kubenswrapper[5024]: livez check failed Oct 07 12:27:57 crc kubenswrapper[5024]: I1007 12:27:57.689540 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 07 12:27:57 crc kubenswrapper[5024]: I1007 12:27:57.862719 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 07 12:27:57 crc kubenswrapper[5024]: I1007 12:27:57.864230 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889"} Oct 07 12:27:57 crc kubenswrapper[5024]: I1007 12:27:57.864354 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:27:57 crc kubenswrapper[5024]: I1007 12:27:57.865364 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:27:57 crc 
kubenswrapper[5024]: I1007 12:27:57.865412 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:27:57 crc kubenswrapper[5024]: I1007 12:27:57.865425 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:27:59 crc kubenswrapper[5024]: I1007 12:27:59.939296 5024 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 07 12:27:59 crc kubenswrapper[5024]: I1007 12:27:59.939408 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Oct 07 12:28:00 crc kubenswrapper[5024]: I1007 12:28:00.120816 5024 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Oct 07 12:28:00 crc kubenswrapper[5024]: I1007 12:28:00.430690 5024 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Oct 07 12:28:00 crc kubenswrapper[5024]: I1007 12:28:00.671281 5024 apiserver.go:52] "Watching apiserver" Oct 07 12:28:00 crc kubenswrapper[5024]: I1007 12:28:00.680224 5024 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Oct 07 12:28:00 crc kubenswrapper[5024]: I1007 12:28:00.680584 5024 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Oct 07 12:28:00 crc kubenswrapper[5024]: I1007 12:28:00.681112 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 07 12:28:00 crc kubenswrapper[5024]: I1007 12:28:00.681194 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 07 12:28:00 crc kubenswrapper[5024]: I1007 12:28:00.681295 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:00 crc kubenswrapper[5024]: I1007 12:28:00.681381 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:00 crc kubenswrapper[5024]: E1007 12:28:00.681616 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:00 crc kubenswrapper[5024]: I1007 12:28:00.681755 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 07 12:28:00 crc kubenswrapper[5024]: E1007 12:28:00.681818 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:00 crc kubenswrapper[5024]: I1007 12:28:00.682173 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:00 crc kubenswrapper[5024]: E1007 12:28:00.682223 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:00 crc kubenswrapper[5024]: I1007 12:28:00.685721 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Oct 07 12:28:00 crc kubenswrapper[5024]: I1007 12:28:00.685884 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Oct 07 12:28:00 crc kubenswrapper[5024]: I1007 12:28:00.686496 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Oct 07 12:28:00 crc kubenswrapper[5024]: I1007 12:28:00.686589 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Oct 07 12:28:00 crc kubenswrapper[5024]: I1007 12:28:00.686709 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Oct 07 12:28:00 crc kubenswrapper[5024]: I1007 12:28:00.687221 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Oct 07 12:28:00 crc kubenswrapper[5024]: I1007 12:28:00.687448 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Oct 07 12:28:00 crc kubenswrapper[5024]: I1007 12:28:00.689063 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Oct 07 12:28:00 crc kubenswrapper[5024]: I1007 12:28:00.689506 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Oct 07 12:28:00 crc kubenswrapper[5024]: I1007 12:28:00.697493 5024 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 07 12:28:00 crc kubenswrapper[5024]: 
I1007 12:28:00.714511 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 07 12:28:00 crc kubenswrapper[5024]: I1007 12:28:00.730824 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 07 12:28:00 crc kubenswrapper[5024]: I1007 12:28:00.750606 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 07 12:28:00 crc kubenswrapper[5024]: I1007 12:28:00.765453 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 07 12:28:00 crc kubenswrapper[5024]: I1007 12:28:00.778516 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 07 12:28:00 crc kubenswrapper[5024]: I1007 12:28:00.791786 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 07 12:28:00 crc kubenswrapper[5024]: I1007 12:28:00.803612 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Oct 07 12:28:02 crc kubenswrapper[5024]: E1007 12:28:02.559087 5024 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.561985 5024 trace.go:236] Trace[1153046235]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (07-Oct-2025 12:27:51.244) (total time: 11316ms): Oct 07 12:28:02 crc kubenswrapper[5024]: Trace[1153046235]: ---"Objects listed" error: 11316ms (12:28:02.561) Oct 07 12:28:02 crc kubenswrapper[5024]: Trace[1153046235]: [11.31694446s] [11.31694446s] END Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.562023 5024 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.564429 5024 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Oct 07 12:28:02 crc kubenswrapper[5024]: E1007 12:28:02.567379 5024 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.587503 5024 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.667208 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.667266 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.667288 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.667306 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.667532 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.668997 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669028 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669051 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669075 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669097 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669121 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669162 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669184 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669205 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669229 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669393 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669421 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669452 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 07 12:28:02 crc 
kubenswrapper[5024]: I1007 12:28:02.669476 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669500 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669523 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669547 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669568 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669593 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669615 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669637 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669661 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669686 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669709 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: 
\"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669733 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669757 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669780 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669800 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669822 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669845 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669865 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669886 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669908 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669931 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669953 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 07 12:28:02 crc 
kubenswrapper[5024]: I1007 12:28:02.669975 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670002 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670023 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670045 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670090 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670115 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670160 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670186 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670208 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670236 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670257 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670278 5024 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670302 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670325 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670349 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670380 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670404 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670430 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670457 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670482 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670535 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670564 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670589 5024 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670616 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670643 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670668 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670694 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670716 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670742 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670765 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670787 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670813 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670839 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670866 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670889 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670916 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670939 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670985 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671011 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: 
\"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671035 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671059 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671107 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671133 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671176 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671199 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671225 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671251 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671277 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671300 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671323 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 07 12:28:02 crc 
kubenswrapper[5024]: I1007 12:28:02.671345 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671369 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671391 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671414 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671437 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671460 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671483 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671504 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671525 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671550 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671573 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 
12:28:02.671598 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671618 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671639 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671661 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671683 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671706 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671727 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671748 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671770 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671794 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671822 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 
12:28:02.671846 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671869 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671891 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671913 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671936 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671960 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod 
\"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671982 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672006 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672028 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672053 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672076 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672099 5024 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672203 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672228 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672254 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672275 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672297 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672325 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672352 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672375 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672398 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672420 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672444 5024 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672557 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672592 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672616 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672637 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672659 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672682 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672707 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672731 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672757 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.667896 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: 
"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.668063 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.668238 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672871 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.673076 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.668966 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669030 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669274 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.669821 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670012 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670110 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670257 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670350 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670463 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670564 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670766 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670797 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670840 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.670851 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671014 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671057 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671185 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671277 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671511 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671530 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671524 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671553 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.673365 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671772 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.671783 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672014 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672052 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672309 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672375 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672383 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672482 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672555 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.673719 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.673737 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.673880 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.673896 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.673958 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.672781 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674113 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674156 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674167 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674180 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674211 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674252 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674278 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674338 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674363 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674387 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674420 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674450 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674468 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674485 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674501 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674518 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674535 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674551 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674568 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674584 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674600 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674616 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674633 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674648 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674666 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674681 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674697 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674713 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674729 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674745 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674762 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674778 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674794 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674809 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674824 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674840 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674857 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674873 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674889 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674906 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674923 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674938 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674954 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674970 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.674988 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675004 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675020 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675037 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675054 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675070 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675086 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675102 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675152 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675189 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675214 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675237 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675258 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675275 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675293 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675311 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675331 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675384 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675402 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675419 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675437 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675456 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675508 5024 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675528 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675539 5024 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675548 5024 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675557 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675566 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675575 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675585 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675594 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675603 5024 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675612 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675621 5024 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675629 5024 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675638 5024 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675647 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675656 5024 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675665 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675674 5024 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675722 5024 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675755 5024 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675783 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675798 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675808 5024 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675819 5024 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675830 5024 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675840 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675853 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675865 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675876 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675888 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675898 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675907 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675918 5024 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675927 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675961 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675972 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675982 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675361 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.690101 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675641 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675653 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675792 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675829 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675848 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675569 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.675996 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.676168 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.676477 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert".
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.676542 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.676755 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.676772 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.676800 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.677070 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.677173 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.677457 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.677469 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.677534 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.677731 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.677850 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.677868 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.677900 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.678152 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.678125 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.678311 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.678499 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.678943 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.679091 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.679426 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.679411 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.679421 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.679641 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.679856 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.680061 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.680523 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.680578 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.680638 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.680724 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.680773 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.681064 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.681168 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.681492 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.681458 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.681792 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.681832 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.682339 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.682613 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.682844 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.683126 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.683309 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.683369 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.684153 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.684499 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.684778 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.685257 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.685988 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.686309 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.687384 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.687998 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.688236 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.688845 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.688892 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.689041 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.689083 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.689165 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.689246 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.689255 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.689401 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: E1007 12:28:02.689496 5024 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.689629 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.689634 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.690289 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.690900 5024 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.691149 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.690925 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.690794 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.690937 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.691043 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.691234 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: E1007 12:28:02.691244 5024 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 07 12:28:02 crc kubenswrapper[5024]: E1007 12:28:02.693656 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-07 12:28:03.193635774 +0000 UTC m=+21.269422612 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.691362 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.691423 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.691793 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.692076 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.692438 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.694038 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.694296 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.694310 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: 
"serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: E1007 12:28:02.694461 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 07 12:28:02 crc kubenswrapper[5024]: E1007 12:28:02.694534 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 07 12:28:02 crc kubenswrapper[5024]: E1007 12:28:02.694599 5024 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:28:02 crc kubenswrapper[5024]: E1007 12:28:02.694702 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-07 12:28:03.194685924 +0000 UTC m=+21.270472762 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.694699 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.695287 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: E1007 12:28:02.695468 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:28:03.195450756 +0000 UTC m=+21.271237624 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.695688 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.695857 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.695887 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.695930 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: E1007 12:28:02.696068 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-07 12:28:03.196051733 +0000 UTC m=+21.271838701 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.696439 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.696526 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.696863 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.697264 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.697490 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.697601 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.697928 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.698313 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.698367 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.699239 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.699350 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.700003 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.701822 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.705878 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 07 12:28:02 crc kubenswrapper[5024]: E1007 12:28:02.706069 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 07 12:28:02 crc kubenswrapper[5024]: E1007 12:28:02.706201 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 07 12:28:02 crc kubenswrapper[5024]: E1007 12:28:02.706254 5024 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:28:02 crc kubenswrapper[5024]: E1007 12:28:02.706421 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-07 12:28:03.206374097 +0000 UTC m=+21.282161045 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.706584 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.707513 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.707616 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.708661 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.710990 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.711217 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.711437 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.711511 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.712059 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.712179 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.712912 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.713045 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.715879 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.716525 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.717016 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.719101 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.720972 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.721155 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.721541 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.723028 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.725022 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.732444 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.733734 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.733906 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.734613 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.734692 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.734708 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.735329 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.735572 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.735606 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.735652 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.735675 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.735688 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.735890 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.736290 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.736490 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.736677 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.736702 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.736770 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.737287 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.737380 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.737488 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.737541 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.737881 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.738788 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.738848 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.738961 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.739045 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.739108 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.743258 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.743334 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.743517 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.743797 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.743925 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.746320 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.748665 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.750474 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:02 crc kubenswrapper[5024]: E1007 12:28:02.750590 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.750663 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:02 crc kubenswrapper[5024]: E1007 12:28:02.750727 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.750860 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:02 crc kubenswrapper[5024]: E1007 12:28:02.750937 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.751943 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.755236 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.756101 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.757451 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.758217 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.759192 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.759748 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.760432 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.761053 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.761314 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.762508 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.763369 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.764614 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.765499 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.767024 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.767729 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.768447 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.769416 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.769722 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.770297 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.771350 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.771831 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Oct 07 12:28:02 crc 
kubenswrapper[5024]: I1007 12:28:02.772453 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.773806 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.774332 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.775303 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.775767 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.776788 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777227 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777297 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777388 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777446 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777492 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777394 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777596 5024 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777605 5024 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777613 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777622 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777633 5024 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777645 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777657 5024 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777667 5024 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777675 5024 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777683 5024 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777690 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777698 5024 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777705 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777713 5024 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777721 5024 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777729 5024 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777736 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777745 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777753 5024 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777761 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777769 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777776 5024 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777784 5024 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777791 5024 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777799 5024 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777807 5024 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777814 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777822 5024 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777830 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777838 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777846 5024 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777853 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777860 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777868 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777875 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777883 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777891 5024 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777899 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777907 5024 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777915 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777923 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777931 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777939 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777954 5024 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777962 5024 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777970 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777978 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777986 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.777996 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778005 5024 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778014 5024 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778024 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778036 5024 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778092 5024 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778101 5024 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778110 5024 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778117 5024 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778125 5024 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778152 5024 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778163 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778173 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778183 5024 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778423 5024 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778436 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778445 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778453 5024 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778462 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778472 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778483 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778494 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778504 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778515 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778527 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778539 5024 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778550 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778559 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778568 5024 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778577 5024 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778586 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778595 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778603 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778612 5024 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778621 5024 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778629 5024 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778639 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778648 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778656 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778664 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778679 5024 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778687 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778695 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778704 5024 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778713 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778721 5024 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778729 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778739 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778749 5024 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778759 5024 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778770 5024 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778781 5024 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778789 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778797 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778805 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778814 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778822 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778830 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778838 5024 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778872 5024 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778882 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778890 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778898 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778906 5024 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778915 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778923 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778932 5024 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778939 5024 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778947 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778987 5024 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.778996 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779005 5024 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779013 5024 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779020 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779028 5024 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779037 5024 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779044 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779052 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779061 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779069 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779077 5024 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779085 5024 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779093 5024 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779102 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779110 5024 reconciler_common.go:293] "Volume detached for volume \"config\"
(UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779118 5024 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779128 5024 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779148 5024 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779156 5024 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779164 5024 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779171 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779179 5024 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:28:02 crc 
kubenswrapper[5024]: I1007 12:28:02.779187 5024 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779195 5024 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779203 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779211 5024 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779219 5024 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779227 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779235 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779242 5024 reconciler_common.go:293] "Volume detached for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779251 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779261 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779269 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779279 5024 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779287 5024 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779294 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779302 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779309 5024 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779482 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.779824 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.781918 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.783520 
5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.784077 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.785198 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.785654 5024 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.785760 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.787863 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.788562 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.789116 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Oct 07 
12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.791071 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.792361 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.792553 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.792990 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.794263 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.794970 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.796260 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.796938 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.797916 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.798966 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.800124 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.800549 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.801534 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.804885 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.805797 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.806107 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.807325 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.807884 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.808483 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" 
path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.812774 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.813620 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.814372 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.814859 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.817801 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.822052 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 07 12:28:02 crc kubenswrapper[5024]: W1007 12:28:02.838629 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-7efd82afecc831da33d610f54224e18bc0158ce534cd533d7889688e42f5c693 WatchSource:0}: Error finding container 7efd82afecc831da33d610f54224e18bc0158ce534cd533d7889688e42f5c693: Status 404 returned error can't find the container with id 7efd82afecc831da33d610f54224e18bc0158ce534cd533d7889688e42f5c693 Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.838693 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.849327 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.858549 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.869374 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.879063 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.879267 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3d0af669b983f0d46fc93b131d959913570b8cb8eefa675c5e49aec934a01e8f"} Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.881467 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7efd82afecc831da33d610f54224e18bc0158ce534cd533d7889688e42f5c693"} Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.884025 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3b9b4a2492a59b80fe2a9f8886aae9ecf71237e100a67ce22918d8f7f37f0e35"} Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.890580 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 07 12:28:02 crc kubenswrapper[5024]: E1007 12:28:02.891567 5024 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.902919 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.912174 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.923913 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 07 12:28:02 crc kubenswrapper[5024]: I1007 12:28:02.933846 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 07 12:28:03 crc kubenswrapper[5024]: I1007 12:28:03.282406 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:28:03 crc kubenswrapper[5024]: E1007 12:28:03.282541 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:28:04.282515474 +0000 UTC m=+22.358302312 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:28:03 crc kubenswrapper[5024]: I1007 12:28:03.282774 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:03 crc kubenswrapper[5024]: I1007 12:28:03.282798 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:03 crc kubenswrapper[5024]: I1007 12:28:03.282820 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:03 crc kubenswrapper[5024]: I1007 12:28:03.282839 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:03 crc kubenswrapper[5024]: E1007 12:28:03.282908 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 07 12:28:03 crc kubenswrapper[5024]: E1007 12:28:03.282920 5024 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 07 12:28:03 crc kubenswrapper[5024]: E1007 12:28:03.282922 5024 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 07 12:28:03 crc kubenswrapper[5024]: E1007 12:28:03.282927 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 07 12:28:03 crc kubenswrapper[5024]: E1007 12:28:03.282962 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-07 12:28:04.282950146 +0000 UTC m=+22.358736974 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 07 12:28:03 crc kubenswrapper[5024]: E1007 12:28:03.282969 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 07 12:28:03 crc kubenswrapper[5024]: E1007 12:28:03.282971 5024 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:28:03 crc kubenswrapper[5024]: E1007 12:28:03.282979 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 07 12:28:03 crc kubenswrapper[5024]: E1007 12:28:03.282976 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-07 12:28:04.282970157 +0000 UTC m=+22.358756995 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 07 12:28:03 crc kubenswrapper[5024]: E1007 12:28:03.282988 5024 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:28:03 crc kubenswrapper[5024]: E1007 12:28:03.283000 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-07 12:28:04.282994018 +0000 UTC m=+22.358780856 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:28:03 crc kubenswrapper[5024]: E1007 12:28:03.283027 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-07 12:28:04.283018238 +0000 UTC m=+22.358805076 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:28:03 crc kubenswrapper[5024]: I1007 12:28:03.887893 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856"} Oct 07 12:28:03 crc kubenswrapper[5024]: I1007 12:28:03.887952 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d"} Oct 07 12:28:03 crc kubenswrapper[5024]: I1007 12:28:03.890744 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5"} Oct 07 12:28:03 crc kubenswrapper[5024]: I1007 12:28:03.902982 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:03Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:03 crc kubenswrapper[5024]: I1007 12:28:03.915780 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:03Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:03 crc kubenswrapper[5024]: I1007 12:28:03.942282 5024 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:03Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:03 crc kubenswrapper[5024]: I1007 12:28:03.979371 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:03Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.009389 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.025004 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.036658 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.052089 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.065753 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.078835 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.090566 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.105193 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.118696 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.134506 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.263839 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.276059 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.278019 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.278510 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.289719 5024 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.291955 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.292019 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.292048 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") 
pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.292082 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.292108 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:04 crc kubenswrapper[5024]: E1007 12:28:04.292169 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:28:06.29212755 +0000 UTC m=+24.367914388 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:28:04 crc kubenswrapper[5024]: E1007 12:28:04.292221 5024 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 07 12:28:04 crc kubenswrapper[5024]: E1007 12:28:04.292277 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 07 12:28:04 crc kubenswrapper[5024]: E1007 12:28:04.292228 5024 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 07 12:28:04 crc kubenswrapper[5024]: E1007 12:28:04.292280 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-07 12:28:06.292268464 +0000 UTC m=+24.368055372 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 07 12:28:04 crc kubenswrapper[5024]: E1007 12:28:04.292339 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-07 12:28:06.292322346 +0000 UTC m=+24.368109184 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 07 12:28:04 crc kubenswrapper[5024]: E1007 12:28:04.292292 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 07 12:28:04 crc kubenswrapper[5024]: E1007 12:28:04.292378 5024 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:28:04 crc kubenswrapper[5024]: E1007 12:28:04.292404 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-10-07 12:28:06.292397408 +0000 UTC m=+24.368184346 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:28:04 crc kubenswrapper[5024]: E1007 12:28:04.292678 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 07 12:28:04 crc kubenswrapper[5024]: E1007 12:28:04.292700 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 07 12:28:04 crc kubenswrapper[5024]: E1007 12:28:04.292710 5024 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:28:04 crc kubenswrapper[5024]: E1007 12:28:04.292739 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-07 12:28:06.292731207 +0000 UTC m=+24.368518125 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.301896 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.315612 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.333813 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.343545 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-rwxtd"] Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.343872 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-t95cr"] Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.344026 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.344202 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.347111 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-lzg82"] Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.347413 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-lzg82" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.347748 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.348271 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.348420 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.348441 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.348596 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.348610 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.348771 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.348926 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.349071 5024 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"kube-root-ca.crt" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.349238 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.350169 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.350221 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.350312 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.358214 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.373001 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.386543 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.401398 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.414535 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.427111 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.443626 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.458067 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.469709 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.484840 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.493585 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-etc-kubernetes\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.493844 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-host-var-lib-cni-bin\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.493950 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-os-release\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.494075 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-host-run-netns\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.494217 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pcg7\" (UniqueName: \"kubernetes.io/projected/f1ac3df5-bf16-419a-87c5-9683eebe3506-kube-api-access-6pcg7\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.494327 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-system-cni-dir\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.494420 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-cnibin\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.494533 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/f1ac3df5-bf16-419a-87c5-9683eebe3506-multus-daemon-config\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.494636 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/273432b3-0436-4a74-afa3-7070f9bf5b3b-rootfs\") pod \"machine-config-daemon-t95cr\" (UID: \"273432b3-0436-4a74-afa3-7070f9bf5b3b\") " pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.494765 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-host-var-lib-cni-multus\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.494873 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4e6d4cde-c984-4157-b9e1-25d800a74264-hosts-file\") pod \"node-resolver-lzg82\" (UID: \"4e6d4cde-c984-4157-b9e1-25d800a74264\") " pod="openshift-dns/node-resolver-lzg82" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.494975 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-host-run-k8s-cni-cncf-io\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.495066 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw2nr\" (UniqueName: 
\"kubernetes.io/projected/273432b3-0436-4a74-afa3-7070f9bf5b3b-kube-api-access-sw2nr\") pod \"machine-config-daemon-t95cr\" (UID: \"273432b3-0436-4a74-afa3-7070f9bf5b3b\") " pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.495168 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56rq2\" (UniqueName: \"kubernetes.io/projected/4e6d4cde-c984-4157-b9e1-25d800a74264-kube-api-access-56rq2\") pod \"node-resolver-lzg82\" (UID: \"4e6d4cde-c984-4157-b9e1-25d800a74264\") " pod="openshift-dns/node-resolver-lzg82" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.495272 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-hostroot\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.495391 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-multus-conf-dir\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.495506 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/273432b3-0436-4a74-afa3-7070f9bf5b3b-proxy-tls\") pod \"machine-config-daemon-t95cr\" (UID: \"273432b3-0436-4a74-afa3-7070f9bf5b3b\") " pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.495614 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/273432b3-0436-4a74-afa3-7070f9bf5b3b-mcd-auth-proxy-config\") pod \"machine-config-daemon-t95cr\" (UID: \"273432b3-0436-4a74-afa3-7070f9bf5b3b\") " pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.495727 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-multus-socket-dir-parent\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.495832 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-host-var-lib-kubelet\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.495916 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-multus-cni-dir\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.495988 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f1ac3df5-bf16-419a-87c5-9683eebe3506-cni-binary-copy\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.496056 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-host-run-multus-certs\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.496864 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.559084 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f584
08f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be62360
73d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.586202 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.597637 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-host-var-lib-cni-bin\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.597685 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-etc-kubernetes\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.597706 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-system-cni-dir\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.597725 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-cnibin\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.597747 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-os-release\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.597745 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-host-var-lib-cni-bin\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.597766 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-host-run-netns\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.597793 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pcg7\" (UniqueName: \"kubernetes.io/projected/f1ac3df5-bf16-419a-87c5-9683eebe3506-kube-api-access-6pcg7\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.597826 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-host-var-lib-cni-multus\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " 
pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.597842 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f1ac3df5-bf16-419a-87c5-9683eebe3506-multus-daemon-config\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.597855 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/273432b3-0436-4a74-afa3-7070f9bf5b3b-rootfs\") pod \"machine-config-daemon-t95cr\" (UID: \"273432b3-0436-4a74-afa3-7070f9bf5b3b\") " pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.597872 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4e6d4cde-c984-4157-b9e1-25d800a74264-hosts-file\") pod \"node-resolver-lzg82\" (UID: \"4e6d4cde-c984-4157-b9e1-25d800a74264\") " pod="openshift-dns/node-resolver-lzg82" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.597887 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56rq2\" (UniqueName: \"kubernetes.io/projected/4e6d4cde-c984-4157-b9e1-25d800a74264-kube-api-access-56rq2\") pod \"node-resolver-lzg82\" (UID: \"4e6d4cde-c984-4157-b9e1-25d800a74264\") " pod="openshift-dns/node-resolver-lzg82" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.597904 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-host-run-k8s-cni-cncf-io\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 
12:28:04.597887 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-etc-kubernetes\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.597939 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-system-cni-dir\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.597921 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw2nr\" (UniqueName: \"kubernetes.io/projected/273432b3-0436-4a74-afa3-7070f9bf5b3b-kube-api-access-sw2nr\") pod \"machine-config-daemon-t95cr\" (UID: \"273432b3-0436-4a74-afa3-7070f9bf5b3b\") " pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.598043 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/273432b3-0436-4a74-afa3-7070f9bf5b3b-rootfs\") pod \"machine-config-daemon-t95cr\" (UID: \"273432b3-0436-4a74-afa3-7070f9bf5b3b\") " pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.598062 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-hostroot\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.598069 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-host-run-netns\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.598088 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-multus-conf-dir\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.598116 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/273432b3-0436-4a74-afa3-7070f9bf5b3b-mcd-auth-proxy-config\") pod \"machine-config-daemon-t95cr\" (UID: \"273432b3-0436-4a74-afa3-7070f9bf5b3b\") " pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.598162 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/273432b3-0436-4a74-afa3-7070f9bf5b3b-proxy-tls\") pod \"machine-config-daemon-t95cr\" (UID: \"273432b3-0436-4a74-afa3-7070f9bf5b3b\") " pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.598183 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-os-release\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.598192 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-multus-socket-dir-parent\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.598252 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-host-var-lib-kubelet\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.598276 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-multus-cni-dir\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.598288 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-host-var-lib-cni-multus\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.598296 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-multus-socket-dir-parent\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.598295 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f1ac3df5-bf16-419a-87c5-9683eebe3506-cni-binary-copy\") pod \"multus-rwxtd\" (UID: 
\"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.598333 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-hostroot\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.598361 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4e6d4cde-c984-4157-b9e1-25d800a74264-hosts-file\") pod \"node-resolver-lzg82\" (UID: \"4e6d4cde-c984-4157-b9e1-25d800a74264\") " pod="openshift-dns/node-resolver-lzg82" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.598369 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-host-run-multus-certs\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.598402 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-multus-conf-dir\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.598530 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-host-run-k8s-cni-cncf-io\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.597899 5024 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-cnibin\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.598574 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-host-run-multus-certs\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.598597 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-host-var-lib-kubelet\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.599155 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f1ac3df5-bf16-419a-87c5-9683eebe3506-multus-cni-dir\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.599278 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f1ac3df5-bf16-419a-87c5-9683eebe3506-multus-daemon-config\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.599318 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/273432b3-0436-4a74-afa3-7070f9bf5b3b-mcd-auth-proxy-config\") pod 
\"machine-config-daemon-t95cr\" (UID: \"273432b3-0436-4a74-afa3-7070f9bf5b3b\") " pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.599178 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f1ac3df5-bf16-419a-87c5-9683eebe3506-cni-binary-copy\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.603190 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/273432b3-0436-4a74-afa3-7070f9bf5b3b-proxy-tls\") pod \"machine-config-daemon-t95cr\" (UID: \"273432b3-0436-4a74-afa3-7070f9bf5b3b\") " pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.616626 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pcg7\" (UniqueName: \"kubernetes.io/projected/f1ac3df5-bf16-419a-87c5-9683eebe3506-kube-api-access-6pcg7\") pod \"multus-rwxtd\" (UID: \"f1ac3df5-bf16-419a-87c5-9683eebe3506\") " pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.620445 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw2nr\" (UniqueName: \"kubernetes.io/projected/273432b3-0436-4a74-afa3-7070f9bf5b3b-kube-api-access-sw2nr\") pod \"machine-config-daemon-t95cr\" (UID: \"273432b3-0436-4a74-afa3-7070f9bf5b3b\") " pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.629369 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56rq2\" (UniqueName: \"kubernetes.io/projected/4e6d4cde-c984-4157-b9e1-25d800a74264-kube-api-access-56rq2\") pod \"node-resolver-lzg82\" (UID: 
\"4e6d4cde-c984-4157-b9e1-25d800a74264\") " pod="openshift-dns/node-resolver-lzg82" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.658079 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.664190 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-rwxtd" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.670803 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-lzg82" Oct 07 12:28:04 crc kubenswrapper[5024]: W1007 12:28:04.680457 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod273432b3_0436_4a74_afa3_7070f9bf5b3b.slice/crio-01ec270133bddd4275bbc7c3af0128b42d2439225a1d0ff732a40631890aa960 WatchSource:0}: Error finding container 01ec270133bddd4275bbc7c3af0128b42d2439225a1d0ff732a40631890aa960: Status 404 returned error can't find the container with id 01ec270133bddd4275bbc7c3af0128b42d2439225a1d0ff732a40631890aa960 Oct 07 12:28:04 crc kubenswrapper[5024]: W1007 12:28:04.696402 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e6d4cde_c984_4157_b9e1_25d800a74264.slice/crio-e83f6007c471722717a29114ece8adbc1d0cf3b4d08a5a7409dbf257f97bcb91 WatchSource:0}: Error finding container e83f6007c471722717a29114ece8adbc1d0cf3b4d08a5a7409dbf257f97bcb91: Status 404 returned error can't find the container with id e83f6007c471722717a29114ece8adbc1d0cf3b4d08a5a7409dbf257f97bcb91 Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.740285 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-blx4r"] Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.740955 5024 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-blx4r" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.741619 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9b4h6"] Oct 07 12:28:04 crc kubenswrapper[5024]: W1007 12:28:04.746605 5024 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: configmaps "default-cni-sysctl-allowlist" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Oct 07 12:28:04 crc kubenswrapper[5024]: E1007 12:28:04.746646 5024 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"default-cni-sysctl-allowlist\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 07 12:28:04 crc kubenswrapper[5024]: W1007 12:28:04.746699 5024 reflector.go:561] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": failed to list *v1.Secret: secrets "multus-ancillary-tools-dockercfg-vnmsz" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Oct 07 12:28:04 crc kubenswrapper[5024]: E1007 12:28:04.746711 5024 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-vnmsz\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"multus-ancillary-tools-dockercfg-vnmsz\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found 
between node 'crc' and this object" logger="UnhandledError" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.746909 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.751230 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:04 crc kubenswrapper[5024]: E1007 12:28:04.751359 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.751438 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:04 crc kubenswrapper[5024]: E1007 12:28:04.751504 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.751563 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:04 crc kubenswrapper[5024]: E1007 12:28:04.751625 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.752943 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.753046 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.753350 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.753455 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.753589 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.753701 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.756093 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.756381 5024 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.765776 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.779757 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.792220 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.804494 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.814592 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.829698 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d6
08d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.844827 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.857790 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.874438 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.894965 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-lzg82" event={"ID":"4e6d4cde-c984-4157-b9e1-25d800a74264","Type":"ContainerStarted","Data":"e83f6007c471722717a29114ece8adbc1d0cf3b4d08a5a7409dbf257f97bcb91"} Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.896586 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rwxtd" event={"ID":"f1ac3df5-bf16-419a-87c5-9683eebe3506","Type":"ContainerStarted","Data":"a3ff8700b512d01598ec726d1df0a4f4ccef868bd7f370e9a6b5435a2eff3b7f"} Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.897292 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"ex
itCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.898029 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerStarted","Data":"01ec270133bddd4275bbc7c3af0128b42d2439225a1d0ff732a40631890aa960"} Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.899623 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/da5e4e6d-289a-4fc4-9672-2709c87b5258-ovn-node-metrics-cert\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.899662 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-etc-openvswitch\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.899688 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/da5e4e6d-289a-4fc4-9672-2709c87b5258-ovnkube-config\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.899718 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-run-ovn\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.899819 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-node-log\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.899880 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-cni-bin\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.899909 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/911ebab3-c489-4067-b3af-80e52173c9b3-cni-binary-copy\") pod \"multus-additional-cni-plugins-blx4r\" (UID: \"911ebab3-c489-4067-b3af-80e52173c9b3\") " pod="openshift-multus/multus-additional-cni-plugins-blx4r" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.899970 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cnibin\" (UniqueName: \"kubernetes.io/host-path/911ebab3-c489-4067-b3af-80e52173c9b3-cnibin\") pod \"multus-additional-cni-plugins-blx4r\" (UID: \"911ebab3-c489-4067-b3af-80e52173c9b3\") " pod="openshift-multus/multus-additional-cni-plugins-blx4r" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.900003 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/911ebab3-c489-4067-b3af-80e52173c9b3-os-release\") pod \"multus-additional-cni-plugins-blx4r\" (UID: \"911ebab3-c489-4067-b3af-80e52173c9b3\") " pod="openshift-multus/multus-additional-cni-plugins-blx4r" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.900032 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/911ebab3-c489-4067-b3af-80e52173c9b3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-blx4r\" (UID: \"911ebab3-c489-4067-b3af-80e52173c9b3\") " pod="openshift-multus/multus-additional-cni-plugins-blx4r" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.900090 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/911ebab3-c489-4067-b3af-80e52173c9b3-system-cni-dir\") pod \"multus-additional-cni-plugins-blx4r\" (UID: \"911ebab3-c489-4067-b3af-80e52173c9b3\") " pod="openshift-multus/multus-additional-cni-plugins-blx4r" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.900124 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vstdr\" (UniqueName: \"kubernetes.io/projected/911ebab3-c489-4067-b3af-80e52173c9b3-kube-api-access-vstdr\") pod \"multus-additional-cni-plugins-blx4r\" (UID: \"911ebab3-c489-4067-b3af-80e52173c9b3\") " pod="openshift-multus/multus-additional-cni-plugins-blx4r" Oct 07 12:28:04 crc 
kubenswrapper[5024]: I1007 12:28:04.900194 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-systemd-units\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.900231 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfsgk\" (UniqueName: \"kubernetes.io/projected/da5e4e6d-289a-4fc4-9672-2709c87b5258-kube-api-access-jfsgk\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.900298 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-run-openvswitch\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.900338 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-run-netns\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.900373 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-run-ovn-kubernetes\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.900417 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-log-socket\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.900448 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/911ebab3-c489-4067-b3af-80e52173c9b3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-blx4r\" (UID: \"911ebab3-c489-4067-b3af-80e52173c9b3\") " pod="openshift-multus/multus-additional-cni-plugins-blx4r" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.900479 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-var-lib-openvswitch\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.900511 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/da5e4e6d-289a-4fc4-9672-2709c87b5258-ovnkube-script-lib\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.900542 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-run-systemd\") pod 
\"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.900566 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-cni-netd\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.900629 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-slash\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.900677 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da5e4e6d-289a-4fc4-9672-2709c87b5258-env-overrides\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.900717 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-kubelet\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.900754 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.913692 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.927503 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.943934 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.956736 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:04 crc kubenswrapper[5024]: I1007 12:28:04.991008 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:04Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.001959 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-systemd-units\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.001990 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfsgk\" (UniqueName: \"kubernetes.io/projected/da5e4e6d-289a-4fc4-9672-2709c87b5258-kube-api-access-jfsgk\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002048 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-run-netns\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002060 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-systemd-units\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002069 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-run-openvswitch\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002123 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-run-ovn-kubernetes\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002203 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-var-lib-openvswitch\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002225 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-log-socket\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002246 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/911ebab3-c489-4067-b3af-80e52173c9b3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-blx4r\" (UID: \"911ebab3-c489-4067-b3af-80e52173c9b3\") " pod="openshift-multus/multus-additional-cni-plugins-blx4r" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002272 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/da5e4e6d-289a-4fc4-9672-2709c87b5258-ovnkube-script-lib\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002294 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-run-systemd\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002314 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-cni-netd\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002337 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da5e4e6d-289a-4fc4-9672-2709c87b5258-env-overrides\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002343 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-var-lib-openvswitch\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002381 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-slash\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 
12:28:05.002357 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-slash\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002435 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-log-socket\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002443 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-run-systemd\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002467 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-cni-netd\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002477 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-kubelet\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002485 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-run-ovn-kubernetes\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002505 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002536 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-etc-openvswitch\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002562 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/da5e4e6d-289a-4fc4-9672-2709c87b5258-ovnkube-config\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002584 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-kubelet\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002586 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/da5e4e6d-289a-4fc4-9672-2709c87b5258-ovn-node-metrics-cert\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002622 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-etc-openvswitch\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002631 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-run-ovn\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002621 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002663 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-run-ovn\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002665 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-node-log\") pod 
\"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002685 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-cni-bin\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002706 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/911ebab3-c489-4067-b3af-80e52173c9b3-cni-binary-copy\") pod \"multus-additional-cni-plugins-blx4r\" (UID: \"911ebab3-c489-4067-b3af-80e52173c9b3\") " pod="openshift-multus/multus-additional-cni-plugins-blx4r" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002727 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/911ebab3-c489-4067-b3af-80e52173c9b3-os-release\") pod \"multus-additional-cni-plugins-blx4r\" (UID: \"911ebab3-c489-4067-b3af-80e52173c9b3\") " pod="openshift-multus/multus-additional-cni-plugins-blx4r" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002749 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/911ebab3-c489-4067-b3af-80e52173c9b3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-blx4r\" (UID: \"911ebab3-c489-4067-b3af-80e52173c9b3\") " pod="openshift-multus/multus-additional-cni-plugins-blx4r" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002779 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/911ebab3-c489-4067-b3af-80e52173c9b3-cnibin\") pod 
\"multus-additional-cni-plugins-blx4r\" (UID: \"911ebab3-c489-4067-b3af-80e52173c9b3\") " pod="openshift-multus/multus-additional-cni-plugins-blx4r" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002812 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/911ebab3-c489-4067-b3af-80e52173c9b3-system-cni-dir\") pod \"multus-additional-cni-plugins-blx4r\" (UID: \"911ebab3-c489-4067-b3af-80e52173c9b3\") " pod="openshift-multus/multus-additional-cni-plugins-blx4r" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.002835 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vstdr\" (UniqueName: \"kubernetes.io/projected/911ebab3-c489-4067-b3af-80e52173c9b3-kube-api-access-vstdr\") pod \"multus-additional-cni-plugins-blx4r\" (UID: \"911ebab3-c489-4067-b3af-80e52173c9b3\") " pod="openshift-multus/multus-additional-cni-plugins-blx4r" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.003159 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da5e4e6d-289a-4fc4-9672-2709c87b5258-env-overrides\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.003197 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-node-log\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.003173 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/911ebab3-c489-4067-b3af-80e52173c9b3-os-release\") pod 
\"multus-additional-cni-plugins-blx4r\" (UID: \"911ebab3-c489-4067-b3af-80e52173c9b3\") " pod="openshift-multus/multus-additional-cni-plugins-blx4r" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.003214 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/911ebab3-c489-4067-b3af-80e52173c9b3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-blx4r\" (UID: \"911ebab3-c489-4067-b3af-80e52173c9b3\") " pod="openshift-multus/multus-additional-cni-plugins-blx4r" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.003246 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-cni-bin\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.003289 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/911ebab3-c489-4067-b3af-80e52173c9b3-system-cni-dir\") pod \"multus-additional-cni-plugins-blx4r\" (UID: \"911ebab3-c489-4067-b3af-80e52173c9b3\") " pod="openshift-multus/multus-additional-cni-plugins-blx4r" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.003319 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/911ebab3-c489-4067-b3af-80e52173c9b3-cnibin\") pod \"multus-additional-cni-plugins-blx4r\" (UID: \"911ebab3-c489-4067-b3af-80e52173c9b3\") " pod="openshift-multus/multus-additional-cni-plugins-blx4r" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.003312 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/da5e4e6d-289a-4fc4-9672-2709c87b5258-ovnkube-script-lib\") pod \"ovnkube-node-9b4h6\" 
(UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.003364 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-run-openvswitch\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.003405 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/da5e4e6d-289a-4fc4-9672-2709c87b5258-ovnkube-config\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.003496 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-run-netns\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.003788 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/911ebab3-c489-4067-b3af-80e52173c9b3-cni-binary-copy\") pod \"multus-additional-cni-plugins-blx4r\" (UID: \"911ebab3-c489-4067-b3af-80e52173c9b3\") " pod="openshift-multus/multus-additional-cni-plugins-blx4r" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.006272 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/da5e4e6d-289a-4fc4-9672-2709c87b5258-ovn-node-metrics-cert\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.008406 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:05Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.030803 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vstdr\" (UniqueName: \"kubernetes.io/projected/911ebab3-c489-4067-b3af-80e52173c9b3-kube-api-access-vstdr\") pod \"multus-additional-cni-plugins-blx4r\" (UID: \"911ebab3-c489-4067-b3af-80e52173c9b3\") " pod="openshift-multus/multus-additional-cni-plugins-blx4r" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.031563 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfsgk\" (UniqueName: \"kubernetes.io/projected/da5e4e6d-289a-4fc4-9672-2709c87b5258-kube-api-access-jfsgk\") pod \"ovnkube-node-9b4h6\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.037561 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:05Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.087871 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:05Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.118731 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:05Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.167981 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"dat
a-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441
ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:05Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.202209 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:05Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.220548 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:05Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.224275 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.233718 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:05Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:05 crc kubenswrapper[5024]: W1007 12:28:05.236111 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda5e4e6d_289a_4fc4_9672_2709c87b5258.slice/crio-13528f304b54836c3aaf59abaf5d72b6f1d3b65a59e4ed15c829a9c6966c43fa WatchSource:0}: Error finding container 13528f304b54836c3aaf59abaf5d72b6f1d3b65a59e4ed15c829a9c6966c43fa: Status 404 returned error can't find the container with id 13528f304b54836c3aaf59abaf5d72b6f1d3b65a59e4ed15c829a9c6966c43fa Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.246881 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:05Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.582544 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.901920 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-lzg82" event={"ID":"4e6d4cde-c984-4157-b9e1-25d800a74264","Type":"ContainerStarted","Data":"4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37"} Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.903894 5024 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rwxtd" event={"ID":"f1ac3df5-bf16-419a-87c5-9683eebe3506","Type":"ContainerStarted","Data":"ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010"} Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.905563 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052"} Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.907125 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerStarted","Data":"9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8"} Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.907173 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerStarted","Data":"8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4"} Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.908494 5024 generic.go:334] "Generic (PLEG): container finished" podID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerID="6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b" exitCode=0 Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.908528 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" event={"ID":"da5e4e6d-289a-4fc4-9672-2709c87b5258","Type":"ContainerDied","Data":"6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b"} Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.908547 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" 
event={"ID":"da5e4e6d-289a-4fc4-9672-2709c87b5258","Type":"ContainerStarted","Data":"13528f304b54836c3aaf59abaf5d72b6f1d3b65a59e4ed15c829a9c6966c43fa"} Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.917894 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:05Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.928645 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:05Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.939973 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:05Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.953926 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:05Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.972865 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:05Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:05 crc kubenswrapper[5024]: I1007 12:28:05.986548 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:05Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.001501 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:05Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:06 crc kubenswrapper[5024]: E1007 12:28:06.003666 5024 configmap.go:193] Couldn't get configMap openshift-multus/default-cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition Oct 07 12:28:06 crc kubenswrapper[5024]: E1007 12:28:06.003729 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/911ebab3-c489-4067-b3af-80e52173c9b3-cni-sysctl-allowlist podName:911ebab3-c489-4067-b3af-80e52173c9b3 nodeName:}" failed. 
No retries permitted until 2025-10-07 12:28:06.503713652 +0000 UTC m=+24.579500490 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/911ebab3-c489-4067-b3af-80e52173c9b3-cni-sysctl-allowlist") pod "multus-additional-cni-plugins-blx4r" (UID: "911ebab3-c489-4067-b3af-80e52173c9b3") : failed to sync configmap cache: timed out waiting for the condition Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.016447 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides
\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.030281 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.048049 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var
/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.062406 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.072078 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.080395 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\
\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.099280 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.118596 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.132043 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.146547 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934
557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.160650 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.179711 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.190759 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.203223 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-07T12:28:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.212872 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.216202 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T1
2:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.240405 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.256024 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.272788 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.285894 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.316944 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.317073 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:06 crc kubenswrapper[5024]: E1007 12:28:06.317092 5024 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:28:10.317074265 +0000 UTC m=+28.392861093 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.317115 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.317150 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.317174 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:06 crc kubenswrapper[5024]: E1007 12:28:06.317215 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 07 12:28:06 crc kubenswrapper[5024]: E1007 12:28:06.317232 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 07 12:28:06 crc kubenswrapper[5024]: E1007 12:28:06.317245 5024 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:28:06 crc kubenswrapper[5024]: E1007 12:28:06.317245 5024 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 07 12:28:06 crc kubenswrapper[5024]: E1007 12:28:06.317258 5024 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 07 12:28:06 crc kubenswrapper[5024]: E1007 12:28:06.317297 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 07 12:28:06 crc kubenswrapper[5024]: E1007 12:28:06.317276 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-10-07 12:28:10.317270181 +0000 UTC m=+28.393057019 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 07 12:28:06 crc kubenswrapper[5024]: E1007 12:28:06.317330 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 07 12:28:06 crc kubenswrapper[5024]: E1007 12:28:06.317342 5024 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:28:06 crc kubenswrapper[5024]: E1007 12:28:06.317356 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-07 12:28:10.317333582 +0000 UTC m=+28.393120460 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:28:06 crc kubenswrapper[5024]: E1007 12:28:06.317376 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-07 12:28:10.317367133 +0000 UTC m=+28.393154051 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 07 12:28:06 crc kubenswrapper[5024]: E1007 12:28:06.317391 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-07 12:28:10.317383224 +0000 UTC m=+28.393170152 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.518273 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/911ebab3-c489-4067-b3af-80e52173c9b3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-blx4r\" (UID: \"911ebab3-c489-4067-b3af-80e52173c9b3\") " pod="openshift-multus/multus-additional-cni-plugins-blx4r" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.519033 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/911ebab3-c489-4067-b3af-80e52173c9b3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-blx4r\" (UID: \"911ebab3-c489-4067-b3af-80e52173c9b3\") " pod="openshift-multus/multus-additional-cni-plugins-blx4r" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.695320 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-blx4r" Oct 07 12:28:06 crc kubenswrapper[5024]: W1007 12:28:06.715739 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod911ebab3_c489_4067_b3af_80e52173c9b3.slice/crio-5fdb97ca52177e13f986cd7ff2ac5c1af55d3b433f37a84240046f2d2daea15c WatchSource:0}: Error finding container 5fdb97ca52177e13f986cd7ff2ac5c1af55d3b433f37a84240046f2d2daea15c: Status 404 returned error can't find the container with id 5fdb97ca52177e13f986cd7ff2ac5c1af55d3b433f37a84240046f2d2daea15c Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.750876 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.750985 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:06 crc kubenswrapper[5024]: E1007 12:28:06.751043 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:06 crc kubenswrapper[5024]: E1007 12:28:06.751178 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.751240 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:06 crc kubenswrapper[5024]: E1007 12:28:06.751332 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.916257 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" event={"ID":"da5e4e6d-289a-4fc4-9672-2709c87b5258","Type":"ContainerStarted","Data":"c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405"} Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.916297 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" event={"ID":"da5e4e6d-289a-4fc4-9672-2709c87b5258","Type":"ContainerStarted","Data":"3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a"} Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.916306 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" event={"ID":"da5e4e6d-289a-4fc4-9672-2709c87b5258","Type":"ContainerStarted","Data":"7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0"} Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.916314 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" 
event={"ID":"da5e4e6d-289a-4fc4-9672-2709c87b5258","Type":"ContainerStarted","Data":"d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5"} Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.916322 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" event={"ID":"da5e4e6d-289a-4fc4-9672-2709c87b5258","Type":"ContainerStarted","Data":"85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a"} Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.916332 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" event={"ID":"da5e4e6d-289a-4fc4-9672-2709c87b5258","Type":"ContainerStarted","Data":"21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c"} Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.917979 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" event={"ID":"911ebab3-c489-4067-b3af-80e52173c9b3","Type":"ContainerStarted","Data":"5fdb97ca52177e13f986cd7ff2ac5c1af55d3b433f37a84240046f2d2daea15c"} Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.942421 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.948072 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.950322 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.954040 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-07T12:28:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.964412 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.974718 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:06 crc kubenswrapper[5024]: I1007 12:28:06.986781 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.000698 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.020526 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.038263 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.049250 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.095165 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.128745 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var
/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.144909 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.156426 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.165526 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.177710 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.187718 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.198069 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.215919 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.228690 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.240800 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.255914 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.267589 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.280604 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.300916 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var
/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.315202 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.329026 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c
9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.340056 5024 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.350202 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.397050 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-flt2r"] Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.397401 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-flt2r" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.400976 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.401175 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.401363 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.403598 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.411859 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.423559 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.435745 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.447983 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.471423 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.501410 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.515584 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.525626 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frszd\" (UniqueName: \"kubernetes.io/projected/88b45b7f-7a47-4643-87f1-4aa98912c0a9-kube-api-access-frszd\") pod \"node-ca-flt2r\" (UID: \"88b45b7f-7a47-4643-87f1-4aa98912c0a9\") " pod="openshift-image-registry/node-ca-flt2r" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.525675 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/88b45b7f-7a47-4643-87f1-4aa98912c0a9-serviceca\") pod \"node-ca-flt2r\" (UID: \"88b45b7f-7a47-4643-87f1-4aa98912c0a9\") " pod="openshift-image-registry/node-ca-flt2r" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.525772 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/88b45b7f-7a47-4643-87f1-4aa98912c0a9-host\") pod \"node-ca-flt2r\" (UID: \"88b45b7f-7a47-4643-87f1-4aa98912c0a9\") " pod="openshift-image-registry/node-ca-flt2r" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.527765 5024 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.545879 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.558091 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934
557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.568965 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.577961 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.589599 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.601629 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.615156 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.626888 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/88b45b7f-7a47-4643-87f1-4aa98912c0a9-host\") pod \"node-ca-flt2r\" (UID: \"88b45b7f-7a47-4643-87f1-4aa98912c0a9\") " pod="openshift-image-registry/node-ca-flt2r" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.626947 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frszd\" (UniqueName: \"kubernetes.io/projected/88b45b7f-7a47-4643-87f1-4aa98912c0a9-kube-api-access-frszd\") pod \"node-ca-flt2r\" (UID: \"88b45b7f-7a47-4643-87f1-4aa98912c0a9\") " pod="openshift-image-registry/node-ca-flt2r" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.626973 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/88b45b7f-7a47-4643-87f1-4aa98912c0a9-serviceca\") pod \"node-ca-flt2r\" (UID: \"88b45b7f-7a47-4643-87f1-4aa98912c0a9\") " pod="openshift-image-registry/node-ca-flt2r" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.627036 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/88b45b7f-7a47-4643-87f1-4aa98912c0a9-host\") pod \"node-ca-flt2r\" (UID: \"88b45b7f-7a47-4643-87f1-4aa98912c0a9\") " pod="openshift-image-registry/node-ca-flt2r" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.628035 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/88b45b7f-7a47-4643-87f1-4aa98912c0a9-serviceca\") pod \"node-ca-flt2r\" (UID: \"88b45b7f-7a47-4643-87f1-4aa98912c0a9\") " pod="openshift-image-registry/node-ca-flt2r" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.647639 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frszd\" (UniqueName: \"kubernetes.io/projected/88b45b7f-7a47-4643-87f1-4aa98912c0a9-kube-api-access-frszd\") pod \"node-ca-flt2r\" (UID: \"88b45b7f-7a47-4643-87f1-4aa98912c0a9\") " pod="openshift-image-registry/node-ca-flt2r" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.710350 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-flt2r" Oct 07 12:28:07 crc kubenswrapper[5024]: W1007 12:28:07.723979 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88b45b7f_7a47_4643_87f1_4aa98912c0a9.slice/crio-4843467205a61f43e3a1889e66bbb4937e5b2cb3216fe9bb2b5637157101903b WatchSource:0}: Error finding container 4843467205a61f43e3a1889e66bbb4937e5b2cb3216fe9bb2b5637157101903b: Status 404 returned error can't find the container with id 4843467205a61f43e3a1889e66bbb4937e5b2cb3216fe9bb2b5637157101903b Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.756420 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:07 crc kubenswrapper[5024]: E1007 12:28:07.756583 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.923818 5024 generic.go:334] "Generic (PLEG): container finished" podID="911ebab3-c489-4067-b3af-80e52173c9b3" containerID="7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2" exitCode=0 Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.923894 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" event={"ID":"911ebab3-c489-4067-b3af-80e52173c9b3","Type":"ContainerDied","Data":"7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2"} Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.925148 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-flt2r" event={"ID":"88b45b7f-7a47-4643-87f1-4aa98912c0a9","Type":"ContainerStarted","Data":"4843467205a61f43e3a1889e66bbb4937e5b2cb3216fe9bb2b5637157101903b"} Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.935773 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.946862 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.959306 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.972176 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.985704 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:07 crc kubenswrapper[5024]: I1007 12:28:07.999310 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:07Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.011995 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:08Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.027798 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:08Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.045794 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:08Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.057566 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:08Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.071644 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c
9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:08Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.099180 5024 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:08Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.138620 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934
557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:08Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.185091 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:08Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.224049 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:08Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.750869 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.750886 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:08 crc kubenswrapper[5024]: E1007 12:28:08.751009 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:08 crc kubenswrapper[5024]: E1007 12:28:08.751095 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.930189 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" event={"ID":"911ebab3-c489-4067-b3af-80e52173c9b3","Type":"ContainerDied","Data":"9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64"} Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.930102 5024 generic.go:334] "Generic (PLEG): container finished" podID="911ebab3-c489-4067-b3af-80e52173c9b3" containerID="9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64" exitCode=0 Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.932774 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-flt2r" event={"ID":"88b45b7f-7a47-4643-87f1-4aa98912c0a9","Type":"ContainerStarted","Data":"71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699"} Oct 07 12:28:08 crc 
kubenswrapper[5024]: I1007 12:28:08.948491 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:08Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:08 crc 
kubenswrapper[5024]: I1007 12:28:08.967092 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:08Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.968094 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.974086 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.974229 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.974275 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.974395 5024 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.981671 5024 kubelet_node_status.go:115] "Node was previously registered" node="crc" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.981897 5024 kubelet_node_status.go:79] "Successfully registered node" node="crc" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.982721 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:08Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.983056 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.983235 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.983278 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.983300 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.983314 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:08Z","lastTransitionTime":"2025-10-07T12:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:08 crc kubenswrapper[5024]: E1007 12:28:08.995305 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:08Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.995958 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:08Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.998565 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.998599 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.998607 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:08 crc 
kubenswrapper[5024]: I1007 12:28:08.998622 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:08 crc kubenswrapper[5024]: I1007 12:28:08.998633 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:08Z","lastTransitionTime":"2025-10-07T12:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.008872 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: E1007 
12:28:09.010387 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.014213 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.014266 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.014282 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.014304 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.014320 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:09Z","lastTransitionTime":"2025-10-07T12:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.027220 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: E1007 12:28:09.028245 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.033434 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.033510 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.033520 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.033537 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.033546 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:09Z","lastTransitionTime":"2025-10-07T12:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:09 crc kubenswrapper[5024]: E1007 12:28:09.045320 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.048587 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.048617 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.048627 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.048641 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.048650 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:09Z","lastTransitionTime":"2025-10-07T12:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.049156 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: E1007 12:28:09.059495 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: E1007 12:28:09.059613 5024 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.061066 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.061096 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.061105 5024 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.061121 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.061163 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:09Z","lastTransitionTime":"2025-10-07T12:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.063845 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.080697 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.093200 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934
557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.105642 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.120079 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.133134 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.146904 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.161099 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.173018 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.185687 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.198532 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.200893 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.200925 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.200932 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.200946 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.200955 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:09Z","lastTransitionTime":"2025-10-07T12:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.211724 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:
28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.229207 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.247249 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.259128 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.270637 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c
9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.282798 5024 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.291646 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934
557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.300158 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.302588 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.302620 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.302629 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.302644 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.302654 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:09Z","lastTransitionTime":"2025-10-07T12:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.337098 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.376899 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.404470 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.404506 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.404515 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 
12:28:09.404531 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.404542 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:09Z","lastTransitionTime":"2025-10-07T12:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.417187 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.462193 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.507275 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.507322 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.507333 5024 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.507350 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.507361 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:09Z","lastTransitionTime":"2025-10-07T12:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.609831 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.609870 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.609880 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.609896 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.609906 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:09Z","lastTransitionTime":"2025-10-07T12:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.712128 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.712173 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.712180 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.712194 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.712206 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:09Z","lastTransitionTime":"2025-10-07T12:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.750622 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:09 crc kubenswrapper[5024]: E1007 12:28:09.750755 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.813980 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.814010 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.814019 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.814032 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.814054 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:09Z","lastTransitionTime":"2025-10-07T12:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.916368 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.916394 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.916404 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.916416 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.916424 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:09Z","lastTransitionTime":"2025-10-07T12:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.938026 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" event={"ID":"da5e4e6d-289a-4fc4-9672-2709c87b5258","Type":"ContainerStarted","Data":"e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d"} Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.940990 5024 generic.go:334] "Generic (PLEG): container finished" podID="911ebab3-c489-4067-b3af-80e52173c9b3" containerID="b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6" exitCode=0 Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.941741 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" event={"ID":"911ebab3-c489-4067-b3af-80e52173c9b3","Type":"ContainerDied","Data":"b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6"} Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.965811 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:09 crc kubenswrapper[5024]: I1007 12:28:09.992325 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:09Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.020876 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.020919 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.020929 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.020947 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.020959 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:10Z","lastTransitionTime":"2025-10-07T12:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.023210 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:10Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.038799 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:10Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.055953 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:10Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.077712 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:10Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.093019 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:10Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.105505 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c
9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:10Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.124015 5024 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:10Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 
12:28:10.124282 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.124307 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.124319 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.124338 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.124349 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:10Z","lastTransitionTime":"2025-10-07T12:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.135248 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:10Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.146903 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:10Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.157032 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:10Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.169181 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:10Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.182180 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-07T12:28:10Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.197371 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:10Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.226579 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.226644 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.226657 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.226674 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.226688 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:10Z","lastTransitionTime":"2025-10-07T12:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.328826 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.329188 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.329198 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.329213 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.329223 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:10Z","lastTransitionTime":"2025-10-07T12:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.354058 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.354435 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.354495 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.354521 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.354551 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:10 crc kubenswrapper[5024]: E1007 12:28:10.354601 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 07 12:28:10 crc kubenswrapper[5024]: E1007 12:28:10.354631 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 07 12:28:10 crc kubenswrapper[5024]: E1007 12:28:10.354645 5024 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:28:10 crc kubenswrapper[5024]: E1007 12:28:10.354657 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 07 12:28:10 crc kubenswrapper[5024]: E1007 12:28:10.354677 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 07 12:28:10 crc kubenswrapper[5024]: E1007 12:28:10.354689 5024 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:28:10 crc 
kubenswrapper[5024]: E1007 12:28:10.354698 5024 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 07 12:28:10 crc kubenswrapper[5024]: E1007 12:28:10.354711 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-07 12:28:18.354692326 +0000 UTC m=+36.430479204 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:28:10 crc kubenswrapper[5024]: E1007 12:28:10.354735 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-07 12:28:18.354722467 +0000 UTC m=+36.430509295 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:28:10 crc kubenswrapper[5024]: E1007 12:28:10.354748 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-07 12:28:18.354742668 +0000 UTC m=+36.430529506 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 07 12:28:10 crc kubenswrapper[5024]: E1007 12:28:10.354756 5024 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 07 12:28:10 crc kubenswrapper[5024]: E1007 12:28:10.354786 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-07 12:28:18.354775319 +0000 UTC m=+36.430562187 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 07 12:28:10 crc kubenswrapper[5024]: E1007 12:28:10.355073 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:28:18.355059827 +0000 UTC m=+36.430846665 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.431555 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.431597 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.431610 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.431630 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.431644 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:10Z","lastTransitionTime":"2025-10-07T12:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.534121 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.534294 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.534394 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.534483 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.534571 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:10Z","lastTransitionTime":"2025-10-07T12:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.637650 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.637699 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.637713 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.637731 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.637744 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:10Z","lastTransitionTime":"2025-10-07T12:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.740432 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.740466 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.740474 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.740489 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.740502 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:10Z","lastTransitionTime":"2025-10-07T12:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.750802 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.750806 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:10 crc kubenswrapper[5024]: E1007 12:28:10.751207 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:10 crc kubenswrapper[5024]: E1007 12:28:10.751545 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.842677 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.842716 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.842725 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.842741 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.842751 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:10Z","lastTransitionTime":"2025-10-07T12:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.946292 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.946331 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.946343 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.946362 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.946375 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:10Z","lastTransitionTime":"2025-10-07T12:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.950839 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" event={"ID":"911ebab3-c489-4067-b3af-80e52173c9b3","Type":"ContainerStarted","Data":"bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba"} Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.966857 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5866
88fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:10Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:10 crc kubenswrapper[5024]: I1007 12:28:10.991582 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:10Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.017475 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be62360
73d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:11Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.035955 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, 
/tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:11Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.048394 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:11Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.049056 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.049087 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.049094 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 
12:28:11.049109 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.049118 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:11Z","lastTransitionTime":"2025-10-07T12:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.060325 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4e
f318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:11Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.070672 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"starte
dAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:11Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.083968 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-07T12:28:11Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.095128 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:11Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.104555 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:11Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.118065 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:11Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.130977 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:11Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.147586 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:11Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.150845 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.150899 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.150910 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.150926 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.150936 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:11Z","lastTransitionTime":"2025-10-07T12:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.160814 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:11Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.172949 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:11Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.256367 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.257157 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.257253 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.257336 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.257430 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:11Z","lastTransitionTime":"2025-10-07T12:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.360062 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.360088 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.360095 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.360108 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.360117 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:11Z","lastTransitionTime":"2025-10-07T12:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.462537 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.462566 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.462579 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.462594 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.462603 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:11Z","lastTransitionTime":"2025-10-07T12:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.564065 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.564097 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.564105 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.564118 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.564127 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:11Z","lastTransitionTime":"2025-10-07T12:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.666173 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.666293 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.666353 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.666427 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.666491 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:11Z","lastTransitionTime":"2025-10-07T12:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.750913 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:11 crc kubenswrapper[5024]: E1007 12:28:11.751025 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.768316 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.768352 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.768360 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.768373 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.768382 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:11Z","lastTransitionTime":"2025-10-07T12:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.870600 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.870643 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.870650 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.870665 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.870674 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:11Z","lastTransitionTime":"2025-10-07T12:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.955702 5024 generic.go:334] "Generic (PLEG): container finished" podID="911ebab3-c489-4067-b3af-80e52173c9b3" containerID="bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba" exitCode=0 Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.955782 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" event={"ID":"911ebab3-c489-4067-b3af-80e52173c9b3","Type":"ContainerDied","Data":"bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba"} Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.959766 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" event={"ID":"da5e4e6d-289a-4fc4-9672-2709c87b5258","Type":"ContainerStarted","Data":"8c147450f8b5cf4716379efdc3e8012f39f8b5d6433266e91634513e188abe47"} Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.960257 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.960284 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.960341 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.971021 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:11Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.972503 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.972536 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.972546 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.972562 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.972574 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:11Z","lastTransitionTime":"2025-10-07T12:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.984676 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:11Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.995026 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:11 crc kubenswrapper[5024]: I1007 12:28:11.995122 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.004304 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c
6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e77
9036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7
e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.021671 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.032892 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.046598 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934
557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.057675 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.070342 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.074551 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.074596 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.074612 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.074631 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.074643 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:12Z","lastTransitionTime":"2025-10-07T12:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.082463 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.095067 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.106979 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.119511 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.137932 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.152611 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.165286 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.179308 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.179598 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.179636 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.179648 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.179666 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.179703 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:12Z","lastTransitionTime":"2025-10-07T12:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.192460 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.205938 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.224738 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.238498 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.249968 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.261780 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.278164 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c147450f8b5cf4716379efdc3e8012f39f8b5d6433266e91634513e188abe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.281251 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.281286 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.281296 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.281311 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.281322 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:12Z","lastTransitionTime":"2025-10-07T12:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.295550 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.307372 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.319702 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.334314 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.344678 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934
557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.352284 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.359992 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.383453 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.383479 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.383486 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.383499 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.383507 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:12Z","lastTransitionTime":"2025-10-07T12:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.485409 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.485444 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.485456 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.485474 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.485489 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:12Z","lastTransitionTime":"2025-10-07T12:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.587585 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.587629 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.587641 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.587656 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.587666 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:12Z","lastTransitionTime":"2025-10-07T12:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.690442 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.690485 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.690496 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.690512 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.690524 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:12Z","lastTransitionTime":"2025-10-07T12:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.751043 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.751115 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:12 crc kubenswrapper[5024]: E1007 12:28:12.751202 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:12 crc kubenswrapper[5024]: E1007 12:28:12.751296 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.765116 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.777368 5024 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\"
:\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.793087 5024 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.793128 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.793150 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.793165 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.793173 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:12Z","lastTransitionTime":"2025-10-07T12:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.795546 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c147450f8b5cf4716379efdc3e8012f39f8b5d6433266e91634513e188abe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.808274 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-0
7T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.824934 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.836787 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.856664 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.874035 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.885875 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.896402 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.896464 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.896476 5024 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.896496 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.896495 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.896514 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:12Z","lastTransitionTime":"2025-10-07T12:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.909196 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.919647 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.929308 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.942210 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.952573 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.966973 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" event={"ID":"911ebab3-c489-4067-b3af-80e52173c9b3","Type":"ContainerStarted","Data":"e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909"} Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.984109 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.995614 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:12Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.999217 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.999245 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.999254 5024 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.999268 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:12 crc kubenswrapper[5024]: I1007 12:28:12.999277 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:12Z","lastTransitionTime":"2025-10-07T12:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.006373 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0
f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.020870 5024 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vst
dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.033097 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426
f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.043701 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
5-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.054649 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.064587 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.074506 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.086586 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.099898 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.101164 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.101197 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.101207 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.101223 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.101235 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:13Z","lastTransitionTime":"2025-10-07T12:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.110563 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.120329 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.130024 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.146975 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c147450f8b5cf4716379efdc3e8012f39f8b5d6433266e91634513e188abe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.203747 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.203788 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.203799 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.203816 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.203828 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:13Z","lastTransitionTime":"2025-10-07T12:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.305839 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.305873 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.305881 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.305894 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.305903 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:13Z","lastTransitionTime":"2025-10-07T12:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.408527 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.408587 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.408611 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.408644 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.408708 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:13Z","lastTransitionTime":"2025-10-07T12:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.512332 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.512402 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.512415 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.512435 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.512454 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:13Z","lastTransitionTime":"2025-10-07T12:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.614828 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.614886 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.614894 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.614907 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.614918 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:13Z","lastTransitionTime":"2025-10-07T12:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.652043 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.674632 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c147450f8b5cf4716379efdc3e8012f39f8b5d6433266e91634513e188abe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.688319 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-0
7T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.705759 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.717093 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.717125 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.717146 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:13 crc 
kubenswrapper[5024]: I1007 12:28:13.717161 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.717174 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:13Z","lastTransitionTime":"2025-10-07T12:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.723391 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 
12:28:13.735595 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 
12:28:13.748608 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vst
dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.750793 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:13 crc kubenswrapper[5024]: E1007 12:28:13.750987 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.767901 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8
ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314
731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-0
7T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.782189 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\"
,\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.800291 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.813536 5024 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.819257 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.819309 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.819327 5024 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.819350 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.819368 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:13Z","lastTransitionTime":"2025-10-07T12:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.829745 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.840666 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.857656 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.867772 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.879635 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.921382 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.921417 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.921427 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.921442 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.921453 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:13Z","lastTransitionTime":"2025-10-07T12:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.972589 5024 generic.go:334] "Generic (PLEG): container finished" podID="911ebab3-c489-4067-b3af-80e52173c9b3" containerID="e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909" exitCode=0 Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.972810 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" event={"ID":"911ebab3-c489-4067-b3af-80e52173c9b3","Type":"ContainerDied","Data":"e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909"} Oct 07 12:28:13 crc kubenswrapper[5024]: I1007 12:28:13.989997 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:13Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.005549 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:14Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.016954 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-07T12:28:14Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.024715 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.024753 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.024766 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.024785 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.024798 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:14Z","lastTransitionTime":"2025-10-07T12:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.030343 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:14Z 
is after 2025-08-24T17:21:41Z" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.046409 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c147450f8b5cf4716379efdc3e8012f39f8b5d6433266e91634513e188abe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:14Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.058538 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-0
7T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:14Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.070063 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:14Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.080970 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:14Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.093865 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:14Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.113320 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:14Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.127096 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.127176 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.127195 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.127215 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.127230 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:14Z","lastTransitionTime":"2025-10-07T12:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.130190 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:14Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.144779 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:14Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.181747 5024 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:14Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.218753 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:14Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.229112 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.229156 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.229170 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.229186 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.229195 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:14Z","lastTransitionTime":"2025-10-07T12:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.257640 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:14Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.331273 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.331310 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.331321 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.331337 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.331347 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:14Z","lastTransitionTime":"2025-10-07T12:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.433353 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.433398 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.433410 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.433430 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.433441 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:14Z","lastTransitionTime":"2025-10-07T12:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.535689 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.535731 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.535742 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.535761 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.535771 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:14Z","lastTransitionTime":"2025-10-07T12:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.637990 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.638320 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.638339 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.638356 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.638368 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:14Z","lastTransitionTime":"2025-10-07T12:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.740511 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.740547 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.740557 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.740576 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.740585 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:14Z","lastTransitionTime":"2025-10-07T12:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.750868 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.750914 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:14 crc kubenswrapper[5024]: E1007 12:28:14.750974 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:14 crc kubenswrapper[5024]: E1007 12:28:14.751027 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.842551 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.842578 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.842587 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.842600 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.842609 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:14Z","lastTransitionTime":"2025-10-07T12:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.944621 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.944660 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.944669 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.944685 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.944696 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:14Z","lastTransitionTime":"2025-10-07T12:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.978303 5024 generic.go:334] "Generic (PLEG): container finished" podID="911ebab3-c489-4067-b3af-80e52173c9b3" containerID="bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5" exitCode=0 Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.978391 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" event={"ID":"911ebab3-c489-4067-b3af-80e52173c9b3","Type":"ContainerDied","Data":"bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5"} Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.980274 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9b4h6_da5e4e6d-289a-4fc4-9672-2709c87b5258/ovnkube-controller/0.log" Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.983265 5024 generic.go:334] "Generic (PLEG): container finished" podID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerID="8c147450f8b5cf4716379efdc3e8012f39f8b5d6433266e91634513e188abe47" exitCode=1 Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.983298 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" event={"ID":"da5e4e6d-289a-4fc4-9672-2709c87b5258","Type":"ContainerDied","Data":"8c147450f8b5cf4716379efdc3e8012f39f8b5d6433266e91634513e188abe47"} Oct 07 12:28:14 crc kubenswrapper[5024]: I1007 12:28:14.984009 5024 scope.go:117] "RemoveContainer" containerID="8c147450f8b5cf4716379efdc3e8012f39f8b5d6433266e91634513e188abe47" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.001327 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:14Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.014730 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based 
request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:/
/fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.026712 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.042220 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77
a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
0-07T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.047033 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.047073 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.047086 5024 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.047103 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.047114 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:15Z","lastTransitionTime":"2025-10-07T12:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.055749 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.067954 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.077356 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.091527 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.102917 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.115126 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.128514 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.140755 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.149462 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.149492 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.149500 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.149517 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.149526 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:15Z","lastTransitionTime":"2025-10-07T12:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.153297 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.165378 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.181652 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c147450f8b5cf4716379efdc3e8012f39f8b5d6433266e91634513e188abe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.193601 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.204875 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.217335 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.230399 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.242536 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.251112 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.251158 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.251169 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.251183 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.251193 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:15Z","lastTransitionTime":"2025-10-07T12:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.254955 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.266603 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.282212 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c147450f8b5cf4716379efdc3e8012f39f8b5d6433266e91634513e188abe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c147450f8b5cf4716379efdc3e8012f39f8b5d6433266e91634513e188abe47\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"message\\\":\\\"208] Removed *v1.Namespace event handler 1\\\\nI1007 12:28:14.761624 6291 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1007 12:28:14.761637 6291 handler.go:190] Sending *v1.Node event handler 7 for 
removal\\\\nI1007 12:28:14.761653 6291 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1007 12:28:14.761658 6291 handler.go:208] Removed *v1.Node event handler 2\\\\nI1007 12:28:14.761665 6291 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1007 12:28:14.761658 6291 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1007 12:28:14.761673 6291 handler.go:208] Removed *v1.Node event handler 7\\\\nI1007 12:28:14.761678 6291 factory.go:656] Stopping watch factory\\\\nI1007 12:28:14.761679 6291 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1007 12:28:14.761703 6291 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1007 12:28:14.761744 6291 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1007 12:28:14.761772 6291 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1007 12:28:14.761811 6291 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overr
ides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.301984 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.325264 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based 
request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:/
/fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.348752 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.353266 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.353430 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.353516 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.353600 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.353670 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:15Z","lastTransitionTime":"2025-10-07T12:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.376552 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.389266 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934
557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.418454 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.455625 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.455655 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.455664 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.455678 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.455687 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:15Z","lastTransitionTime":"2025-10-07T12:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.459970 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.558040 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.558078 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.558093 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.558115 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.558127 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:15Z","lastTransitionTime":"2025-10-07T12:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.660155 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.660206 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.660222 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.660244 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.660262 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:15Z","lastTransitionTime":"2025-10-07T12:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.750669 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:15 crc kubenswrapper[5024]: E1007 12:28:15.750806 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.761973 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.762002 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.762010 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.762025 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.762034 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:15Z","lastTransitionTime":"2025-10-07T12:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.864203 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.864237 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.864245 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.864259 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.864268 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:15Z","lastTransitionTime":"2025-10-07T12:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.966796 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.966829 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.966839 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.966857 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.966877 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:15Z","lastTransitionTime":"2025-10-07T12:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.990236 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" event={"ID":"911ebab3-c489-4067-b3af-80e52173c9b3","Type":"ContainerStarted","Data":"12d37bf017bd31dbbe180a6ac44f4953e2bfe47013e3a9f1f1e5a7989cfb694d"} Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.993624 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9b4h6_da5e4e6d-289a-4fc4-9672-2709c87b5258/ovnkube-controller/1.log" Oct 07 12:28:15 crc kubenswrapper[5024]: I1007 12:28:15.994223 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9b4h6_da5e4e6d-289a-4fc4-9672-2709c87b5258/ovnkube-controller/0.log" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.000254 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" event={"ID":"da5e4e6d-289a-4fc4-9672-2709c87b5258","Type":"ContainerStarted","Data":"d90148625643f830d5f729d3f2f6a79a3e565c16d3950e9688a177480a6044aa"} Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.001912 5024 scope.go:117] "RemoveContainer" containerID="d90148625643f830d5f729d3f2f6a79a3e565c16d3950e9688a177480a6044aa" Oct 07 12:28:16 crc kubenswrapper[5024]: E1007 12:28:16.002218 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-9b4h6_openshift-ovn-kubernetes(da5e4e6d-289a-4fc4-9672-2709c87b5258)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.006875 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.027279 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.044444 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.058720 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.068832 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:16 crc 
kubenswrapper[5024]: I1007 12:28:16.068860 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.068868 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.068882 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.068891 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:16Z","lastTransitionTime":"2025-10-07T12:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.080199 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c147450f8b5cf4716379efdc3e8012f39f8b5d6433266e91634513e188abe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c147450f8b5cf4716379efdc3e8012f39f8b5d6433266e91634513e188abe47\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12
:28:14Z\\\",\\\"message\\\":\\\"208] Removed *v1.Namespace event handler 1\\\\nI1007 12:28:14.761624 6291 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1007 12:28:14.761637 6291 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1007 12:28:14.761653 6291 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1007 12:28:14.761658 6291 handler.go:208] Removed *v1.Node event handler 2\\\\nI1007 12:28:14.761665 6291 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1007 12:28:14.761658 6291 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1007 12:28:14.761673 6291 handler.go:208] Removed *v1.Node event handler 7\\\\nI1007 12:28:14.761678 6291 factory.go:656] Stopping watch factory\\\\nI1007 12:28:14.761679 6291 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1007 12:28:14.761703 6291 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1007 12:28:14.761744 6291 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1007 12:28:14.761772 6291 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1007 12:28:14.761811 6291 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overr
ides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.103269 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.121482 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based 
request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:/
/fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.140952 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.155759 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d37bf017bd31dbbe180a6ac44f4953e2bfe47013e3a9f1f1e5a7989cfb694d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb7eb
9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.168394 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: 
I1007 12:28:16.171297 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.171349 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.171367 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.171391 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.171408 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:16Z","lastTransitionTime":"2025-10-07T12:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.183100 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.197379 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.217021 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.233782 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.248876 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.262938 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.273892 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.273926 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:16 crc 
kubenswrapper[5024]: I1007 12:28:16.273937 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.273955 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.273969 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:16Z","lastTransitionTime":"2025-10-07T12:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.278636 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.296124 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.314393 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.333735 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.352299 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.375695 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90148625643f830d5f729d3f2f6a79a3e565c16d3950e9688a177480a6044aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c147450f8b5cf4716379efdc3e8012f39f8b5d6433266e91634513e188abe47\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"message\\\":\\\"208] Removed *v1.Namespace event handler 1\\\\nI1007 12:28:14.761624 6291 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1007 12:28:14.761637 6291 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1007 12:28:14.761653 6291 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1007 12:28:14.761658 
6291 handler.go:208] Removed *v1.Node event handler 2\\\\nI1007 12:28:14.761665 6291 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1007 12:28:14.761658 6291 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1007 12:28:14.761673 6291 handler.go:208] Removed *v1.Node event handler 7\\\\nI1007 12:28:14.761678 6291 factory.go:656] Stopping watch factory\\\\nI1007 12:28:14.761679 6291 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1007 12:28:14.761703 6291 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1007 12:28:14.761744 6291 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1007 12:28:14.761772 6291 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1007 12:28:14.761811 6291 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d90148625643f830d5f729d3f2f6a79a3e565c16d3950e9688a177480a6044aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"message\\\":\\\"] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z]\\\\nI1007 12:28:15.964779 6456 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\
"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"
env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 
12:28:16.376969 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.376995 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.377005 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.377019 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.377029 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:16Z","lastTransitionTime":"2025-10-07T12:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.391550 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.423600 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.461724 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.479749 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.479793 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.479805 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.479822 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.479837 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:16Z","lastTransitionTime":"2025-10-07T12:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.502708 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d37bf017bd31dbbe180a6ac44f4953e2bfe47013e3a9f1f1e5a7989cfb694d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.525243 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6"] Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.525716 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.549867 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.551516 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.570955 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.583473 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.583538 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.583558 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.583583 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.583608 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:16Z","lastTransitionTime":"2025-10-07T12:28:16Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.611261 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/295aa2ac-5da1-4e47-ad64-b8e7c34e576a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bzcv6\" (UID: \"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.611296 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smp4t\" (UniqueName: \"kubernetes.io/projected/295aa2ac-5da1-4e47-ad64-b8e7c34e576a-kube-api-access-smp4t\") pod \"ovnkube-control-plane-749d76644c-bzcv6\" (UID: \"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.611328 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/295aa2ac-5da1-4e47-ad64-b8e7c34e576a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bzcv6\" (UID: \"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.611496 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/295aa2ac-5da1-4e47-ad64-b8e7c34e576a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bzcv6\" (UID: \"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.617403 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.658583 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.685357 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.685410 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.685421 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:16 crc 
kubenswrapper[5024]: I1007 12:28:16.685437 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.685447 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:16Z","lastTransitionTime":"2025-10-07T12:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.700224 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.712168 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/295aa2ac-5da1-4e47-ad64-b8e7c34e576a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bzcv6\" (UID: \"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.712219 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/295aa2ac-5da1-4e47-ad64-b8e7c34e576a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bzcv6\" (UID: \"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.712247 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smp4t\" (UniqueName: \"kubernetes.io/projected/295aa2ac-5da1-4e47-ad64-b8e7c34e576a-kube-api-access-smp4t\") pod \"ovnkube-control-plane-749d76644c-bzcv6\" (UID: \"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.712283 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/295aa2ac-5da1-4e47-ad64-b8e7c34e576a-ovnkube-config\") pod 
\"ovnkube-control-plane-749d76644c-bzcv6\" (UID: \"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.713101 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/295aa2ac-5da1-4e47-ad64-b8e7c34e576a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bzcv6\" (UID: \"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.713356 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/295aa2ac-5da1-4e47-ad64-b8e7c34e576a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bzcv6\" (UID: \"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.721998 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/295aa2ac-5da1-4e47-ad64-b8e7c34e576a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bzcv6\" (UID: \"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.739689 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.751086 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.751086 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:16 crc kubenswrapper[5024]: E1007 12:28:16.751307 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:16 crc kubenswrapper[5024]: E1007 12:28:16.751345 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.768437 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smp4t\" (UniqueName: \"kubernetes.io/projected/295aa2ac-5da1-4e47-ad64-b8e7c34e576a-kube-api-access-smp4t\") pod \"ovnkube-control-plane-749d76644c-bzcv6\" (UID: \"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.787866 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.788070 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.788159 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.788243 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.788319 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:16Z","lastTransitionTime":"2025-10-07T12:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.801882 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.839970 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.840163 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: W1007 12:28:16.855307 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod295aa2ac_5da1_4e47_ad64_b8e7c34e576a.slice/crio-52ec9b2aa18fca8fd0b1bba140de0ef41cbc660ff6b46b2297ece6d9cae2bae8 WatchSource:0}: Error finding container 52ec9b2aa18fca8fd0b1bba140de0ef41cbc660ff6b46b2297ece6d9cae2bae8: Status 404 returned error can't find the container with id 52ec9b2aa18fca8fd0b1bba140de0ef41cbc660ff6b46b2297ece6d9cae2bae8 Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.879110 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.890552 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:16 crc 
kubenswrapper[5024]: I1007 12:28:16.890782 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.890870 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.890962 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.891047 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:16Z","lastTransitionTime":"2025-10-07T12:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.927101 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90148625643f830d5f729d3f2f6a79a3e565c16d3950e9688a177480a6044aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c147450f8b5cf4716379efdc3e8012f39f8b5d6433266e91634513e188abe47\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"message\\\":\\\"208] Removed *v1.Namespace event handler 1\\\\nI1007 12:28:14.761624 6291 handler.go:190] 
Sending *v1.Node event handler 2 for removal\\\\nI1007 12:28:14.761637 6291 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1007 12:28:14.761653 6291 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1007 12:28:14.761658 6291 handler.go:208] Removed *v1.Node event handler 2\\\\nI1007 12:28:14.761665 6291 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1007 12:28:14.761658 6291 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1007 12:28:14.761673 6291 handler.go:208] Removed *v1.Node event handler 7\\\\nI1007 12:28:14.761678 6291 factory.go:656] Stopping watch factory\\\\nI1007 12:28:14.761679 6291 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1007 12:28:14.761703 6291 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1007 12:28:14.761744 6291 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1007 12:28:14.761772 6291 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1007 12:28:14.761811 6291 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d90148625643f830d5f729d3f2f6a79a3e565c16d3950e9688a177480a6044aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"message\\\":\\\"] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} 
was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z]\\\\nI1007 12:28:15.964779 6456 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, 
Switches:[]\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2
e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.962061 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.994128 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.994178 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:16 crc kubenswrapper[5024]: I1007 12:28:16.994187 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:16.994203 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:16.994213 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:16Z","lastTransitionTime":"2025-10-07T12:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:16.999606 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:16Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.003532 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" event={"ID":"295aa2ac-5da1-4e47-ad64-b8e7c34e576a","Type":"ContainerStarted","Data":"52ec9b2aa18fca8fd0b1bba140de0ef41cbc660ff6b46b2297ece6d9cae2bae8"} Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.005712 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9b4h6_da5e4e6d-289a-4fc4-9672-2709c87b5258/ovnkube-controller/1.log" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.007706 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9b4h6_da5e4e6d-289a-4fc4-9672-2709c87b5258/ovnkube-controller/0.log" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.010696 5024 generic.go:334] "Generic (PLEG): container finished" podID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerID="d90148625643f830d5f729d3f2f6a79a3e565c16d3950e9688a177480a6044aa" 
exitCode=1 Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.010788 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" event={"ID":"da5e4e6d-289a-4fc4-9672-2709c87b5258","Type":"ContainerDied","Data":"d90148625643f830d5f729d3f2f6a79a3e565c16d3950e9688a177480a6044aa"} Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.010860 5024 scope.go:117] "RemoveContainer" containerID="8c147450f8b5cf4716379efdc3e8012f39f8b5d6433266e91634513e188abe47" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.012068 5024 scope.go:117] "RemoveContainer" containerID="d90148625643f830d5f729d3f2f6a79a3e565c16d3950e9688a177480a6044aa" Oct 07 12:28:17 crc kubenswrapper[5024]: E1007 12:28:17.012390 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-9b4h6_openshift-ovn-kubernetes(da5e4e6d-289a-4fc4-9672-2709c87b5258)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.038486 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:17Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.083931 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d37bf017bd31dbbe180a6ac44f4953e2bfe47013e3a9f1f1e5a7989cfb694d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb7eb
9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:17Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.096503 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.096546 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.096558 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.096575 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.096587 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:17Z","lastTransitionTime":"2025-10-07T12:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.131502 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:17Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.159700 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\"
,\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:17Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.197854 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:17Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.199152 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.199187 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.199196 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.199210 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.199220 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:17Z","lastTransitionTime":"2025-10-07T12:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.241555 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzcv6\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:17Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.280337 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:17Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 
12:28:17.301046 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.301082 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.301090 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.301104 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.301115 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:17Z","lastTransitionTime":"2025-10-07T12:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.317251 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:17Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.356955 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:17Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.400424 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:17Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.402818 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.402847 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.402857 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.402871 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.402882 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:17Z","lastTransitionTime":"2025-10-07T12:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.439970 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:17Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.478553 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:17Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.504971 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.505068 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.505082 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.505101 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.505114 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:17Z","lastTransitionTime":"2025-10-07T12:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.519504 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:17Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.565596 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:17Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.600826 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:17Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.607783 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:17 crc 
kubenswrapper[5024]: I1007 12:28:17.607811 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.607822 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.607835 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.607845 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:17Z","lastTransitionTime":"2025-10-07T12:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.647438 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90148625643f830d5f729d3f2f6a79a3e565c16d3950e9688a177480a6044aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d90148625643f830d5f729d3f2f6a79a3e565c16d3950e9688a177480a6044aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"message\\\":\\\"] failed to run ovnkube: [failed to start network controller: failed to start default 
network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z]\\\\nI1007 12:28:15.964779 6456 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9b4h6_openshift-ovn-kubernetes(da5e4e6d-289a-4fc4-9672-2709c87b5258)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd99
9008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:17Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.681587 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:17Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.710538 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.710634 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.710652 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.710675 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.710692 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:17Z","lastTransitionTime":"2025-10-07T12:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.719605 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:17Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.751312 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:17 crc kubenswrapper[5024]: E1007 12:28:17.751469 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.759880 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:17Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.809658 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d37bf017bd31dbbe180a6ac44f4953e2bfe47013e3a9f1f1e5a7989cfb694d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb7eb
9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:17Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.814028 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.814069 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.814085 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.814109 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.814126 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:17Z","lastTransitionTime":"2025-10-07T12:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.853085 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:17Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.879189 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:17Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.917001 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.917059 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.917076 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.917103 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.917118 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:17Z","lastTransitionTime":"2025-10-07T12:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.921250 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:17Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:17 crc kubenswrapper[5024]: I1007 12:28:17.960253 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzcv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:17Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.004414 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:18Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.014957 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" event={"ID":"295aa2ac-5da1-4e47-ad64-b8e7c34e576a","Type":"ContainerStarted","Data":"5dc6860b4d42d22ef807fe55ffa5981a6b9751db0b7626415f075387807019ad"} Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.017292 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9b4h6_da5e4e6d-289a-4fc4-9672-2709c87b5258/ovnkube-controller/1.log" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.018923 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.019093 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.019256 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.019408 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.019555 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:18Z","lastTransitionTime":"2025-10-07T12:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.122690 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.122738 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.122754 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.122777 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.122792 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:18Z","lastTransitionTime":"2025-10-07T12:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.225814 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.225859 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.225869 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.225888 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.225901 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:18Z","lastTransitionTime":"2025-10-07T12:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.327994 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.328257 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.328324 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.328388 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.328452 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:18Z","lastTransitionTime":"2025-10-07T12:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.372789 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-gtmmn"] Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.373506 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:18 crc kubenswrapper[5024]: E1007 12:28:18.373651 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.391120 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d37bf017bd31dbbe180a6ac44f4953e2bfe47013e3a9f1f1e5a7989cfb694d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\
\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wherea
bouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:18Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.414270 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68
e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\"
:{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"
192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49
117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:18Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.428768 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.428862 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-metrics-certs\") pod \"network-metrics-daemon-gtmmn\" (UID: \"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\") " pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.428905 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.428932 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.428958 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.428986 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmrt9\" (UniqueName: \"kubernetes.io/projected/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-kube-api-access-rmrt9\") pod \"network-metrics-daemon-gtmmn\" (UID: \"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\") " pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.429012 5024 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:18 crc kubenswrapper[5024]: E1007 12:28:18.429117 5024 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 07 12:28:18 crc kubenswrapper[5024]: E1007 12:28:18.429187 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-07 12:28:34.429171346 +0000 UTC m=+52.504958184 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 07 12:28:18 crc kubenswrapper[5024]: E1007 12:28:18.429421 5024 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 07 12:28:18 crc kubenswrapper[5024]: E1007 12:28:18.429491 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-10-07 12:28:34.429472355 +0000 UTC m=+52.505259283 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 07 12:28:18 crc kubenswrapper[5024]: E1007 12:28:18.429513 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:28:34.429502816 +0000 UTC m=+52.505289774 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:28:18 crc kubenswrapper[5024]: E1007 12:28:18.429572 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 07 12:28:18 crc kubenswrapper[5024]: E1007 12:28:18.429595 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 07 12:28:18 crc kubenswrapper[5024]: E1007 12:28:18.429608 5024 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:28:18 crc kubenswrapper[5024]: E1007 12:28:18.429661 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-07 12:28:34.42964345 +0000 UTC m=+52.505430288 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:28:18 crc kubenswrapper[5024]: E1007 12:28:18.429793 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 07 12:28:18 crc kubenswrapper[5024]: E1007 12:28:18.429911 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 07 12:28:18 crc kubenswrapper[5024]: E1007 12:28:18.429948 5024 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.430022 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:18 crc 
kubenswrapper[5024]: I1007 12:28:18.430051 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.430062 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.430076 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.430085 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:18Z","lastTransitionTime":"2025-10-07T12:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:18 crc kubenswrapper[5024]: E1007 12:28:18.430102 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-07 12:28:34.430049641 +0000 UTC m=+52.505836699 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.438363 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\
"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:18Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.454010 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:18Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.465128 5024 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzcv6\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:18Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.477961 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gtmmn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gtmmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:18Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:18 crc 
kubenswrapper[5024]: I1007 12:28:18.493647 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:18Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.503891 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:18Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.512666 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:18Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.522779 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:18Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.530053 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmrt9\" (UniqueName: 
\"kubernetes.io/projected/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-kube-api-access-rmrt9\") pod \"network-metrics-daemon-gtmmn\" (UID: \"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\") " pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.530101 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-metrics-certs\") pod \"network-metrics-daemon-gtmmn\" (UID: \"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\") " pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:18 crc kubenswrapper[5024]: E1007 12:28:18.530419 5024 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 07 12:28:18 crc kubenswrapper[5024]: E1007 12:28:18.530597 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-metrics-certs podName:ac027a0c-8461-4ea2-9a6e-40b4af6721b9 nodeName:}" failed. No retries permitted until 2025-10-07 12:28:19.03055849 +0000 UTC m=+37.106345358 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-metrics-certs") pod "network-metrics-daemon-gtmmn" (UID: "ac027a0c-8461-4ea2-9a6e-40b4af6721b9") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.535202 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:18Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.536863 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.536899 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.536915 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.536937 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.536953 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:18Z","lastTransitionTime":"2025-10-07T12:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.544824 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmrt9\" (UniqueName: \"kubernetes.io/projected/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-kube-api-access-rmrt9\") pod \"network-metrics-daemon-gtmmn\" (UID: \"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\") " pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.545871 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12
:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:18Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.556592 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:18Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.583733 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90148625643f830d5f729d3f2f6a79a3e565c16d3950e9688a177480a6044aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d90148625643f830d5f729d3f2f6a79a3e565c16d3950e9688a177480a6044aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"message\\\":\\\"] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has 
stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z]\\\\nI1007 12:28:15.964779 6456 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9b4h6_openshift-ovn-kubernetes(da5e4e6d-289a-4fc4-9672-2709c87b5258)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd99
9008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:18Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.618264 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:18Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.638806 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.638854 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.638862 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.638878 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.638886 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:18Z","lastTransitionTime":"2025-10-07T12:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.658800 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:18Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.702360 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:18Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.741465 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.741812 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.742028 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.742205 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.742340 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:18Z","lastTransitionTime":"2025-10-07T12:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.750785 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.750821 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:18 crc kubenswrapper[5024]: E1007 12:28:18.750945 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:18 crc kubenswrapper[5024]: E1007 12:28:18.751023 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.844726 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.844766 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.844779 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.844793 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.844802 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:18Z","lastTransitionTime":"2025-10-07T12:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.946935 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.946970 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.946986 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.947001 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:18 crc kubenswrapper[5024]: I1007 12:28:18.947020 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:18Z","lastTransitionTime":"2025-10-07T12:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.027010 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" event={"ID":"295aa2ac-5da1-4e47-ad64-b8e7c34e576a","Type":"ContainerStarted","Data":"149adefb2db413aed81eb5aea43cb4c659b0c2efcb6394f3dc80ea73c7a775a4"} Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.034427 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-metrics-certs\") pod \"network-metrics-daemon-gtmmn\" (UID: \"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\") " pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:19 crc kubenswrapper[5024]: E1007 12:28:19.034577 5024 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 07 12:28:19 crc kubenswrapper[5024]: E1007 12:28:19.034648 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-metrics-certs podName:ac027a0c-8461-4ea2-9a6e-40b4af6721b9 nodeName:}" failed. No retries permitted until 2025-10-07 12:28:20.034619577 +0000 UTC m=+38.110406415 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-metrics-certs") pod "network-metrics-daemon-gtmmn" (UID: "ac027a0c-8461-4ea2-9a6e-40b4af6721b9") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.043522 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:19Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.048892 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.048960 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.048970 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.048992 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.049005 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:19Z","lastTransitionTime":"2025-10-07T12:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.060195 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:19Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.077570 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:19Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.089733 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:19Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.100637 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:19Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.114375 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:19Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.128228 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:19 crc 
kubenswrapper[5024]: I1007 12:28:19.128262 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.128270 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.128284 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.128292 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:19Z","lastTransitionTime":"2025-10-07T12:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.137060 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90148625643f830d5f729d3f2f6a79a3e565c16d3950e9688a177480a6044aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d90148625643f830d5f729d3f2f6a79a3e565c16d3950e9688a177480a6044aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"message\\\":\\\"] failed to run ovnkube: [failed to start network controller: failed to start default 
network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z]\\\\nI1007 12:28:15.964779 6456 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9b4h6_openshift-ovn-kubernetes(da5e4e6d-289a-4fc4-9672-2709c87b5258)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd99
9008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:19Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:19 crc kubenswrapper[5024]: E1007 12:28:19.139606 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:19Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.143046 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.143075 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.143082 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.143096 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.143105 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:19Z","lastTransitionTime":"2025-10-07T12:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.152471 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:19Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.166003 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:19Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:19 crc kubenswrapper[5024]: E1007 12:28:19.166275 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:19Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.169261 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.169303 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.169316 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.169333 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.169344 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:19Z","lastTransitionTime":"2025-10-07T12:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.181503 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:19Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:19 crc kubenswrapper[5024]: E1007 12:28:19.185019 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:19Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.188073 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.188105 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.188115 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.188129 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.188149 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:19Z","lastTransitionTime":"2025-10-07T12:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:19 crc kubenswrapper[5024]: E1007 12:28:19.199746 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:19Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.203658 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d37bf017bd31dbbe180a6ac44f4953e2bfe47013e3a9f1f1e5a7989cfb694d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742f
d0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mou
ntPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:19Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.204129 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.204215 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.204233 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.204255 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.204271 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:19Z","lastTransitionTime":"2025-10-07T12:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:19 crc kubenswrapper[5024]: E1007 12:28:19.220810 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:19Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:19 crc kubenswrapper[5024]: E1007 12:28:19.220981 5024 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.222876 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.222929 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.222940 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.222955 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.222965 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:19Z","lastTransitionTime":"2025-10-07T12:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.228338 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:19Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.239679 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:19Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.256295 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:19Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.299181 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dc6860b4d42d22ef807fe55ffa5981a6b9751db0b7626415f075387807019ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149adefb2db413aed81eb5aea43cb4c659b0c
2efcb6394f3dc80ea73c7a775a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzcv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:19Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.324957 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.324995 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.325006 5024 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.325021 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.325031 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:19Z","lastTransitionTime":"2025-10-07T12:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.336651 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gtmmn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gtmmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:19Z is after 2025-08-24T17:21:41Z" Oct 
07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.379555 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:19Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.426850 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.426991 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.427001 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.427016 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.427026 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:19Z","lastTransitionTime":"2025-10-07T12:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.529918 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.529968 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.529985 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.530004 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.530015 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:19Z","lastTransitionTime":"2025-10-07T12:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.632898 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.632944 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.632953 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.632967 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.632977 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:19Z","lastTransitionTime":"2025-10-07T12:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.735849 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.736019 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.736097 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.736328 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.736378 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:19Z","lastTransitionTime":"2025-10-07T12:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.750644 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:19 crc kubenswrapper[5024]: E1007 12:28:19.750760 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.839375 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.839427 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.839440 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.839460 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.839474 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:19Z","lastTransitionTime":"2025-10-07T12:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.942196 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.942252 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.942296 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.942320 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:19 crc kubenswrapper[5024]: I1007 12:28:19.942339 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:19Z","lastTransitionTime":"2025-10-07T12:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.043946 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-metrics-certs\") pod \"network-metrics-daemon-gtmmn\" (UID: \"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\") " pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:20 crc kubenswrapper[5024]: E1007 12:28:20.044244 5024 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 07 12:28:20 crc kubenswrapper[5024]: E1007 12:28:20.044386 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-metrics-certs podName:ac027a0c-8461-4ea2-9a6e-40b4af6721b9 nodeName:}" failed. No retries permitted until 2025-10-07 12:28:22.044355316 +0000 UTC m=+40.120142204 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-metrics-certs") pod "network-metrics-daemon-gtmmn" (UID: "ac027a0c-8461-4ea2-9a6e-40b4af6721b9") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.045339 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.045369 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.045380 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.045395 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.045406 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:20Z","lastTransitionTime":"2025-10-07T12:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.147688 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.147760 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.147783 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.147813 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.147836 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:20Z","lastTransitionTime":"2025-10-07T12:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.249915 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.249956 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.249966 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.250007 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.250018 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:20Z","lastTransitionTime":"2025-10-07T12:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.352336 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.352387 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.352396 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.352411 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.352420 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:20Z","lastTransitionTime":"2025-10-07T12:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.456808 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.456854 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.456866 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.456883 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.456895 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:20Z","lastTransitionTime":"2025-10-07T12:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.559848 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.559922 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.559940 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.559970 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.559988 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:20Z","lastTransitionTime":"2025-10-07T12:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.662490 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.662529 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.662546 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.662563 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.662572 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:20Z","lastTransitionTime":"2025-10-07T12:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.750721 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:20 crc kubenswrapper[5024]: E1007 12:28:20.751208 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.750764 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:20 crc kubenswrapper[5024]: E1007 12:28:20.751495 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.750764 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:20 crc kubenswrapper[5024]: E1007 12:28:20.751787 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.766729 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.766805 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.766826 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.766857 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.766878 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:20Z","lastTransitionTime":"2025-10-07T12:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.870083 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.870406 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.870501 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.870590 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.870682 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:20Z","lastTransitionTime":"2025-10-07T12:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.973903 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.973948 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.973956 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.973973 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:20 crc kubenswrapper[5024]: I1007 12:28:20.973983 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:20Z","lastTransitionTime":"2025-10-07T12:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.077174 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.077218 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.077229 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.077248 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.077260 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:21Z","lastTransitionTime":"2025-10-07T12:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.179952 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.180020 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.180043 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.180079 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.180117 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:21Z","lastTransitionTime":"2025-10-07T12:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.282740 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.282771 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.282781 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.282796 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.282806 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:21Z","lastTransitionTime":"2025-10-07T12:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.385475 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.385560 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.385578 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.385609 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.385631 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:21Z","lastTransitionTime":"2025-10-07T12:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.488526 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.488577 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.488612 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.488626 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.488635 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:21Z","lastTransitionTime":"2025-10-07T12:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.592051 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.592531 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.592798 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.593046 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.593360 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:21Z","lastTransitionTime":"2025-10-07T12:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.697121 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.697684 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.697883 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.698043 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.698266 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:21Z","lastTransitionTime":"2025-10-07T12:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.751473 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:21 crc kubenswrapper[5024]: E1007 12:28:21.752395 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.802129 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.802251 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.802264 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.802280 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.802290 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:21Z","lastTransitionTime":"2025-10-07T12:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.904968 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.905006 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.905016 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.905032 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:21 crc kubenswrapper[5024]: I1007 12:28:21.905043 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:21Z","lastTransitionTime":"2025-10-07T12:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.008314 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.008374 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.008387 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.008407 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.008421 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:22Z","lastTransitionTime":"2025-10-07T12:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.066063 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-metrics-certs\") pod \"network-metrics-daemon-gtmmn\" (UID: \"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\") " pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:22 crc kubenswrapper[5024]: E1007 12:28:22.066256 5024 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 07 12:28:22 crc kubenswrapper[5024]: E1007 12:28:22.066313 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-metrics-certs podName:ac027a0c-8461-4ea2-9a6e-40b4af6721b9 nodeName:}" failed. No retries permitted until 2025-10-07 12:28:26.066297717 +0000 UTC m=+44.142084555 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-metrics-certs") pod "network-metrics-daemon-gtmmn" (UID: "ac027a0c-8461-4ea2-9a6e-40b4af6721b9") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.110675 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.110712 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.110722 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.110738 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.110748 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:22Z","lastTransitionTime":"2025-10-07T12:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.213290 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.213358 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.213381 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.213408 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.213429 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:22Z","lastTransitionTime":"2025-10-07T12:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.315901 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.315972 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.316037 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.316070 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.316091 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:22Z","lastTransitionTime":"2025-10-07T12:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.418986 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.419019 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.419029 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.419044 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.419057 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:22Z","lastTransitionTime":"2025-10-07T12:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.521229 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.521290 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.521311 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.521341 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.521362 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:22Z","lastTransitionTime":"2025-10-07T12:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.623895 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.623922 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.623930 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.623944 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.623953 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:22Z","lastTransitionTime":"2025-10-07T12:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.726580 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.726621 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.726632 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.726646 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.726656 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:22Z","lastTransitionTime":"2025-10-07T12:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.750986 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.751021 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.751032 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:22 crc kubenswrapper[5024]: E1007 12:28:22.751126 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:22 crc kubenswrapper[5024]: E1007 12:28:22.751209 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:28:22 crc kubenswrapper[5024]: E1007 12:28:22.751288 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.764398 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:22Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.778103 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:22Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.790186 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:22Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.801479 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dc6860b4d42d22ef807fe55ffa5981a6b9751db0b7626415f075387807019ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149adefb2db413aed81eb5aea43cb4c659b0c
2efcb6394f3dc80ea73c7a775a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzcv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:22Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.813803 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gtmmn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gtmmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:22Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:22 crc 
kubenswrapper[5024]: I1007 12:28:22.826325 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:22Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.829587 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.829621 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.829635 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.829656 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.829672 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:22Z","lastTransitionTime":"2025-10-07T12:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.838784 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:22Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.853538 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:22Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.866564 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:22Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.877890 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:22Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.890833 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:22Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.906640 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:22Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.924023 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90148625643f830d5f729d3f2f6a79a3e565c16d3950e9688a177480a6044aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d90148625643f830d5f729d3f2f6a79a3e565c16d3950e9688a177480a6044aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"message\\\":\\\"] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has 
stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z]\\\\nI1007 12:28:15.964779 6456 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9b4h6_openshift-ovn-kubernetes(da5e4e6d-289a-4fc4-9672-2709c87b5258)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd99
9008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:22Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.931671 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.931698 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.931707 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.931722 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.931730 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:22Z","lastTransitionTime":"2025-10-07T12:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.942680 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:22Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.957086 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\"
,\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:22Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.972615 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:22Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:22 crc kubenswrapper[5024]: I1007 12:28:22.994037 5024 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d37bf017bd31dbbe180a6ac44f4953e2bfe47013e3a9f1f1e5a7989cfb694d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:22Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.034503 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.034553 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.034567 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.034629 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.034645 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:23Z","lastTransitionTime":"2025-10-07T12:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.137846 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.137889 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.137923 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.137949 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.137960 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:23Z","lastTransitionTime":"2025-10-07T12:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.241011 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.241079 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.241095 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.241119 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.241160 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:23Z","lastTransitionTime":"2025-10-07T12:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.343692 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.343771 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.343794 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.343832 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.344088 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:23Z","lastTransitionTime":"2025-10-07T12:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.446426 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.446460 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.446468 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.446481 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.446490 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:23Z","lastTransitionTime":"2025-10-07T12:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.549713 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.549773 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.549780 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.549795 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.549804 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:23Z","lastTransitionTime":"2025-10-07T12:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.652193 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.652253 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.652263 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.652279 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.652292 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:23Z","lastTransitionTime":"2025-10-07T12:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.750673 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:23 crc kubenswrapper[5024]: E1007 12:28:23.750839 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.754719 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.754753 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.754764 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.754779 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.754790 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:23Z","lastTransitionTime":"2025-10-07T12:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.857503 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.857559 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.857573 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.857597 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.857613 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:23Z","lastTransitionTime":"2025-10-07T12:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.959654 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.959692 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.959705 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.959720 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:23 crc kubenswrapper[5024]: I1007 12:28:23.959731 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:23Z","lastTransitionTime":"2025-10-07T12:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.062676 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.062722 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.062734 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.062753 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.062770 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:24Z","lastTransitionTime":"2025-10-07T12:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.165495 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.165561 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.165573 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.165591 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.165603 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:24Z","lastTransitionTime":"2025-10-07T12:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.267600 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.267653 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.267665 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.267676 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.267685 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:24Z","lastTransitionTime":"2025-10-07T12:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.371737 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.371796 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.371811 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.371832 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.371846 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:24Z","lastTransitionTime":"2025-10-07T12:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.474368 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.474402 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.474412 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.474426 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.474435 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:24Z","lastTransitionTime":"2025-10-07T12:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.577510 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.577577 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.577597 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.577621 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.577634 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:24Z","lastTransitionTime":"2025-10-07T12:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.680839 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.680869 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.680878 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.680893 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.680903 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:24Z","lastTransitionTime":"2025-10-07T12:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.751235 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.751320 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:24 crc kubenswrapper[5024]: E1007 12:28:24.751392 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.751257 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:24 crc kubenswrapper[5024]: E1007 12:28:24.751538 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:28:24 crc kubenswrapper[5024]: E1007 12:28:24.751810 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.783831 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.784048 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.784141 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.784225 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.784290 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:24Z","lastTransitionTime":"2025-10-07T12:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.887056 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.887093 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.887104 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.887127 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.887143 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:24Z","lastTransitionTime":"2025-10-07T12:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.989738 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.990049 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.990140 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.990263 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:24 crc kubenswrapper[5024]: I1007 12:28:24.990344 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:24Z","lastTransitionTime":"2025-10-07T12:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.093749 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.093801 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.093818 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.093841 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.093859 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:25Z","lastTransitionTime":"2025-10-07T12:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.196365 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.196417 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.196429 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.196449 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.196462 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:25Z","lastTransitionTime":"2025-10-07T12:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.300012 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.300060 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.300070 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.300085 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.300094 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:25Z","lastTransitionTime":"2025-10-07T12:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.402929 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.402977 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.402988 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.403007 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.403022 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:25Z","lastTransitionTime":"2025-10-07T12:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.505551 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.505607 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.505625 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.505649 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.505667 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:25Z","lastTransitionTime":"2025-10-07T12:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.607963 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.608043 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.608096 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.608127 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.608177 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:25Z","lastTransitionTime":"2025-10-07T12:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.710125 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.710181 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.710192 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.710208 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.710229 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:25Z","lastTransitionTime":"2025-10-07T12:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.750512 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:25 crc kubenswrapper[5024]: E1007 12:28:25.750671 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.812792 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.812833 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.812842 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.812856 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.812866 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:25Z","lastTransitionTime":"2025-10-07T12:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.914957 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.914996 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.915005 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.915023 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:25 crc kubenswrapper[5024]: I1007 12:28:25.915033 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:25Z","lastTransitionTime":"2025-10-07T12:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.017681 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.017720 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.017728 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.017741 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.017750 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:26Z","lastTransitionTime":"2025-10-07T12:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.107765 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-metrics-certs\") pod \"network-metrics-daemon-gtmmn\" (UID: \"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\") " pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:26 crc kubenswrapper[5024]: E1007 12:28:26.107880 5024 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 07 12:28:26 crc kubenswrapper[5024]: E1007 12:28:26.107931 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-metrics-certs podName:ac027a0c-8461-4ea2-9a6e-40b4af6721b9 nodeName:}" failed. No retries permitted until 2025-10-07 12:28:34.107917921 +0000 UTC m=+52.183704759 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-metrics-certs") pod "network-metrics-daemon-gtmmn" (UID: "ac027a0c-8461-4ea2-9a6e-40b4af6721b9") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.119944 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.119980 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.119990 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.120004 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.120013 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:26Z","lastTransitionTime":"2025-10-07T12:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.222730 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.222768 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.222781 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.222799 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.222812 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:26Z","lastTransitionTime":"2025-10-07T12:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.325751 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.325798 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.325810 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.325830 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.325841 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:26Z","lastTransitionTime":"2025-10-07T12:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.428561 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.428603 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.428613 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.428629 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.428640 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:26Z","lastTransitionTime":"2025-10-07T12:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.531933 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.531967 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.531977 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.531992 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.532004 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:26Z","lastTransitionTime":"2025-10-07T12:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.634254 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.634328 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.634347 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.634373 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.634392 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:26Z","lastTransitionTime":"2025-10-07T12:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.737426 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.737464 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.737475 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.737489 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.737500 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:26Z","lastTransitionTime":"2025-10-07T12:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.751168 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:26 crc kubenswrapper[5024]: E1007 12:28:26.751280 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.751176 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.751478 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:26 crc kubenswrapper[5024]: E1007 12:28:26.751496 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:26 crc kubenswrapper[5024]: E1007 12:28:26.751837 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.840449 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.840549 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.840569 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.840620 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.840642 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:26Z","lastTransitionTime":"2025-10-07T12:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.943385 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.943446 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.943461 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.943482 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:26 crc kubenswrapper[5024]: I1007 12:28:26.943498 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:26Z","lastTransitionTime":"2025-10-07T12:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.046246 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.046559 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.046673 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.046791 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.046888 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:27Z","lastTransitionTime":"2025-10-07T12:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.149334 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.149367 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.149378 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.149396 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.149407 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:27Z","lastTransitionTime":"2025-10-07T12:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.251568 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.251682 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.251693 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.251712 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.251723 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:27Z","lastTransitionTime":"2025-10-07T12:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.354132 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.354463 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.354580 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.354647 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.354721 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:27Z","lastTransitionTime":"2025-10-07T12:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.457319 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.457355 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.457366 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.457381 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.457392 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:27Z","lastTransitionTime":"2025-10-07T12:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.560223 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.560282 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.560304 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.560333 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.560353 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:27Z","lastTransitionTime":"2025-10-07T12:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.663167 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.663217 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.663233 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.663254 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.663281 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:27Z","lastTransitionTime":"2025-10-07T12:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.750923 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:27 crc kubenswrapper[5024]: E1007 12:28:27.751101 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.765778 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.765812 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.765820 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.765833 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.765842 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:27Z","lastTransitionTime":"2025-10-07T12:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.869755 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.869850 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.869877 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.869947 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.869970 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:27Z","lastTransitionTime":"2025-10-07T12:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.973003 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.973048 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.973063 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.973081 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:27 crc kubenswrapper[5024]: I1007 12:28:27.973093 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:27Z","lastTransitionTime":"2025-10-07T12:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.075129 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.075205 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.075220 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.075243 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.075258 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:28Z","lastTransitionTime":"2025-10-07T12:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.177814 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.177873 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.177885 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.177904 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.177923 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:28Z","lastTransitionTime":"2025-10-07T12:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.285248 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.285290 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.285300 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.285314 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.285324 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:28Z","lastTransitionTime":"2025-10-07T12:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.388238 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.388304 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.388319 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.388345 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.388362 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:28Z","lastTransitionTime":"2025-10-07T12:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.490747 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.490781 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.490788 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.490802 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.490811 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:28Z","lastTransitionTime":"2025-10-07T12:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.593446 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.593496 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.593507 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.593526 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.593538 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:28Z","lastTransitionTime":"2025-10-07T12:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.696055 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.696127 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.696213 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.696248 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.696272 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:28Z","lastTransitionTime":"2025-10-07T12:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.751568 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.751706 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:28 crc kubenswrapper[5024]: E1007 12:28:28.752370 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.751852 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:28 crc kubenswrapper[5024]: E1007 12:28:28.752479 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:28:28 crc kubenswrapper[5024]: E1007 12:28:28.751728 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.752508 5024 scope.go:117] "RemoveContainer" containerID="d90148625643f830d5f729d3f2f6a79a3e565c16d3950e9688a177480a6044aa" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.798756 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.798796 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.798805 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.798822 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.798834 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:28Z","lastTransitionTime":"2025-10-07T12:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.901262 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.901539 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.901718 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.901881 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:28 crc kubenswrapper[5024]: I1007 12:28:28.902068 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:28Z","lastTransitionTime":"2025-10-07T12:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.004855 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.004908 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.004923 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.004947 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.004964 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:29Z","lastTransitionTime":"2025-10-07T12:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.064835 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9b4h6_da5e4e6d-289a-4fc4-9672-2709c87b5258/ovnkube-controller/1.log" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.068367 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" event={"ID":"da5e4e6d-289a-4fc4-9672-2709c87b5258","Type":"ContainerStarted","Data":"6dff604a546898db06c641fe85aaec7ba7883e439a07e03883540ae3306548fd"} Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.068822 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.082138 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.095150 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.105501 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.107087 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.107124 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.107137 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.107166 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.107179 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:29Z","lastTransitionTime":"2025-10-07T12:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.117218 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.127034 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.136875 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.146798 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.164530 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dff604a546898db06c641fe85aaec7ba7883e439a07e03883540ae3306548fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d90148625643f830d5f729d3f2f6a79a3e565c16d3950e9688a177480a6044aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"message\\\":\\\"] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has 
stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z]\\\\nI1007 12:28:15.964779 6456 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, 
Switches:[]\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.194728 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.209196 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.209226 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.209235 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.209250 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.209260 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:29Z","lastTransitionTime":"2025-10-07T12:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.220499 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.241658 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.257844 5024 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d37bf017bd31dbbe180a6ac44f4953e2bfe47013e3a9f1f1e5a7989cfb694d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.270466 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pr
oxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.279343 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5
6rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.287912 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.297501 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dc6860b4d42d22ef807fe55ffa5981a6b9751db0b7626415f075387807019ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149adefb2db413aed81eb5aea43cb4c659b0c
2efcb6394f3dc80ea73c7a775a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzcv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.305833 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gtmmn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gtmmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:29 crc 
kubenswrapper[5024]: I1007 12:28:29.311440 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.311475 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.311484 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.311502 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.311514 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:29Z","lastTransitionTime":"2025-10-07T12:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.413606 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.413647 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.413656 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.413671 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.413681 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:29Z","lastTransitionTime":"2025-10-07T12:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.473202 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.473501 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.473613 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.473684 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.473740 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:29Z","lastTransitionTime":"2025-10-07T12:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:29 crc kubenswrapper[5024]: E1007 12:28:29.486045 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.489795 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.489837 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.489848 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.489864 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.489876 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:29Z","lastTransitionTime":"2025-10-07T12:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:29 crc kubenswrapper[5024]: E1007 12:28:29.501054 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.504164 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.504206 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.504216 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.504234 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.504248 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:29Z","lastTransitionTime":"2025-10-07T12:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:29 crc kubenswrapper[5024]: E1007 12:28:29.515007 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.518276 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.518341 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.518352 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.518370 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.518383 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:29Z","lastTransitionTime":"2025-10-07T12:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:29 crc kubenswrapper[5024]: E1007 12:28:29.530448 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.533891 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.533926 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.533938 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.533954 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.533966 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:29Z","lastTransitionTime":"2025-10-07T12:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:29 crc kubenswrapper[5024]: E1007 12:28:29.545867 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:29 crc kubenswrapper[5024]: E1007 12:28:29.545995 5024 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.547680 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.547710 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.547718 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.547731 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.547740 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:29Z","lastTransitionTime":"2025-10-07T12:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.649589 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.649824 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.649831 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.649844 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.649853 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:29Z","lastTransitionTime":"2025-10-07T12:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.750436 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:29 crc kubenswrapper[5024]: E1007 12:28:29.750557 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.751679 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.751706 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.751717 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.751731 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.751742 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:29Z","lastTransitionTime":"2025-10-07T12:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.853618 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.853656 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.853668 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.853684 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.853697 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:29Z","lastTransitionTime":"2025-10-07T12:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.956000 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.956033 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.956040 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.956053 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:29 crc kubenswrapper[5024]: I1007 12:28:29.956060 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:29Z","lastTransitionTime":"2025-10-07T12:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.058082 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.058118 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.058137 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.058180 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.058193 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:30Z","lastTransitionTime":"2025-10-07T12:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.072098 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9b4h6_da5e4e6d-289a-4fc4-9672-2709c87b5258/ovnkube-controller/2.log" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.072674 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9b4h6_da5e4e6d-289a-4fc4-9672-2709c87b5258/ovnkube-controller/1.log" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.074724 5024 generic.go:334] "Generic (PLEG): container finished" podID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerID="6dff604a546898db06c641fe85aaec7ba7883e439a07e03883540ae3306548fd" exitCode=1 Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.074762 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" event={"ID":"da5e4e6d-289a-4fc4-9672-2709c87b5258","Type":"ContainerDied","Data":"6dff604a546898db06c641fe85aaec7ba7883e439a07e03883540ae3306548fd"} Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.074802 5024 scope.go:117] "RemoveContainer" containerID="d90148625643f830d5f729d3f2f6a79a3e565c16d3950e9688a177480a6044aa" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.075683 5024 scope.go:117] "RemoveContainer" containerID="6dff604a546898db06c641fe85aaec7ba7883e439a07e03883540ae3306548fd" Oct 07 12:28:30 crc kubenswrapper[5024]: E1007 12:28:30.075827 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9b4h6_openshift-ovn-kubernetes(da5e4e6d-289a-4fc4-9672-2709c87b5258)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.089763 5024 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:30Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.103077 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:30Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.115089 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-07T12:28:30Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.126240 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-07T12:28:30Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.147669 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dff604a546898db06c641fe85aaec7ba7883e439a07e03883540ae3306548fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d90148625643f830d5f729d3f2f6a79a3e565c16d3950e9688a177480a6044aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"message\\\":\\\"] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has 
stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:15Z is after 2025-08-24T17:21:41Z]\\\\nI1007 12:28:15.964779 6456 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dff604a546898db06c641fe85aaec7ba7883e439a07e03883540ae3306548fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"perator LB template configs for network=default: []services.lbConfig(nil)\\\\nF1007 12:28:29.596578 6680 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not 
added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z]\\\\nI1007 12:28:29.597591 6680 services_controller.go:451] Built service openshift-machine-api/machine-api-operator cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, 
EmptyLB\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496
688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:30Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.160982 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.161011 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.161019 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.161032 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.161041 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:30Z","lastTransitionTime":"2025-10-07T12:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.162673 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:30Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.176270 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:30Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.191522 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:30Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.205888 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d37bf017bd31dbbe180a6ac44f4953e2bfe47013e3a9f1f1e5a7989cfb694d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb7eb
9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:30Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.223307 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a78368
8f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:30Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.238236 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\"
,\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:30Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.249353 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:30Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.260645 5024 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dc6860b4d42d22ef807fe55ffa5981a6b9751db0b7626415f075387807019ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149adefb2db413aed81eb5aea43cb4c659b0c2efcb6394f3dc80ea73c7a775a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzcv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:30Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.263615 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.263644 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 
12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.263651 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.263664 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.263673 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:30Z","lastTransitionTime":"2025-10-07T12:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.269173 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gtmmn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gtmmn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:30Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.278930 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:30Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 
12:28:30.288578 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:30Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.297249 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a
69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:30Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.366015 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.366053 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.366062 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.366077 5024 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.366086 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:30Z","lastTransitionTime":"2025-10-07T12:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.468594 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.468622 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.468631 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.468644 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.468653 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:30Z","lastTransitionTime":"2025-10-07T12:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.570549 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.570741 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.570821 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.570904 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.571021 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:30Z","lastTransitionTime":"2025-10-07T12:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.673626 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.673957 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.674056 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.674132 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.674229 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:30Z","lastTransitionTime":"2025-10-07T12:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.750513 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.750513 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:30 crc kubenswrapper[5024]: E1007 12:28:30.750665 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.750690 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:30 crc kubenswrapper[5024]: E1007 12:28:30.750743 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:30 crc kubenswrapper[5024]: E1007 12:28:30.750801 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.776522 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.776565 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.776575 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.776594 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.776604 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:30Z","lastTransitionTime":"2025-10-07T12:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.878830 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.878869 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.878878 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.878891 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.878900 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:30Z","lastTransitionTime":"2025-10-07T12:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.981362 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.981420 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.981441 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.981469 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:30 crc kubenswrapper[5024]: I1007 12:28:30.981490 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:30Z","lastTransitionTime":"2025-10-07T12:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.080641 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9b4h6_da5e4e6d-289a-4fc4-9672-2709c87b5258/ovnkube-controller/2.log" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.083887 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.083961 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.083976 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.083996 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.084042 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:31Z","lastTransitionTime":"2025-10-07T12:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.086354 5024 scope.go:117] "RemoveContainer" containerID="6dff604a546898db06c641fe85aaec7ba7883e439a07e03883540ae3306548fd" Oct 07 12:28:31 crc kubenswrapper[5024]: E1007 12:28:31.086598 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9b4h6_openshift-ovn-kubernetes(da5e4e6d-289a-4fc4-9672-2709c87b5258)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.120468 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dff604a546898db06c641fe85aaec7ba7883e439a07e03883540ae3306548fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dff604a546898db06c641fe85aaec7ba7883e439a07e03883540ae3306548fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"perator LB template configs for network=default: []services.lbConfig(nil)\\\\nF1007 12:28:29.596578 6680 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network 
policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z]\\\\nI1007 12:28:29.597591 6680 services_controller.go:451] Built service openshift-machine-api/machine-api-operator cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLB\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9b4h6_openshift-ovn-kubernetes(da5e4e6d-289a-4fc4-9672-2709c87b5258)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd99
9008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:31Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.145607 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:31Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.166914 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:31Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.186907 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.186967 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.186985 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.187012 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.187028 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:31Z","lastTransitionTime":"2025-10-07T12:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.187660 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:31Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.209153 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:31Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.232452 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d37bf017bd31dbbe180a6ac44f4953e2bfe47013e3a9f1f1e5a7989cfb694d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d
382213cb522b6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:
28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:31Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.256498 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:31Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.269029 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\"
,\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:31Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.282235 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:31Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.289697 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.289759 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.289778 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.289805 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.289822 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:31Z","lastTransitionTime":"2025-10-07T12:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.294572 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gtmmn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gtmmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:31Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:31 crc 
kubenswrapper[5024]: I1007 12:28:31.306462 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:31Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.319417 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:31Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.329510 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:31Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.340628 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dc6860b4d42d22ef807fe55ffa5981a6b9751db0b7626415f075387807019ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149adefb2db413aed81eb5aea43cb4c659b0c
2efcb6394f3dc80ea73c7a775a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzcv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:31Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.354560 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:31Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.366964 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-07T12:28:31Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.379842 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:31Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.396075 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.396440 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.396455 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.396473 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.396491 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:31Z","lastTransitionTime":"2025-10-07T12:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.498565 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.498595 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.498603 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.498615 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.498624 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:31Z","lastTransitionTime":"2025-10-07T12:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.600959 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.600996 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.601005 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.601018 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.601027 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:31Z","lastTransitionTime":"2025-10-07T12:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.702798 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.702838 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.702850 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.702865 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.702876 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:31Z","lastTransitionTime":"2025-10-07T12:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.750794 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:31 crc kubenswrapper[5024]: E1007 12:28:31.750913 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.805483 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.805521 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.805529 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.805543 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.805552 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:31Z","lastTransitionTime":"2025-10-07T12:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.908601 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.908633 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.908644 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.908660 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:31 crc kubenswrapper[5024]: I1007 12:28:31.908672 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:31Z","lastTransitionTime":"2025-10-07T12:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.011379 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.011414 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.011422 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.011437 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.011445 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:32Z","lastTransitionTime":"2025-10-07T12:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.114204 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.114253 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.114271 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.114293 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.114310 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:32Z","lastTransitionTime":"2025-10-07T12:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.217309 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.217363 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.217377 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.217394 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.217404 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:32Z","lastTransitionTime":"2025-10-07T12:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.319806 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.319844 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.319855 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.319870 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.319881 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:32Z","lastTransitionTime":"2025-10-07T12:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.421607 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.421658 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.421666 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.421679 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.421688 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:32Z","lastTransitionTime":"2025-10-07T12:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.524374 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.524423 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.524431 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.524445 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.524454 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:32Z","lastTransitionTime":"2025-10-07T12:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.626746 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.626794 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.626804 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.626823 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.626834 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:32Z","lastTransitionTime":"2025-10-07T12:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.728958 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.729021 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.729041 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.729065 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.729082 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:32Z","lastTransitionTime":"2025-10-07T12:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.751333 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.751367 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.751344 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:32 crc kubenswrapper[5024]: E1007 12:28:32.751507 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:32 crc kubenswrapper[5024]: E1007 12:28:32.751609 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:28:32 crc kubenswrapper[5024]: E1007 12:28:32.751810 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.782821 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dff604a546898db06c641fe85aaec7ba7883e439a07e03883540ae3306548fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dff604a546898db06c641fe85aaec7ba7883e439a07e03883540ae3306548fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"perator LB template configs for network=default: []services.lbConfig(nil)\\\\nF1007 12:28:29.596578 6680 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network 
policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z]\\\\nI1007 12:28:29.597591 6680 services_controller.go:451] Built service openshift-machine-api/machine-api-operator cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLB\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9b4h6_openshift-ovn-kubernetes(da5e4e6d-289a-4fc4-9672-2709c87b5258)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd99
9008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:32Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.795022 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:32Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.810106 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:32Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.821071 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:32Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.831398 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.831450 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.831466 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.831483 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.831496 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:32Z","lastTransitionTime":"2025-10-07T12:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.834974 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:
28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:32Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.849238 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d37bf017bd31dbbe180a6ac44f4953e2bfe47013e3a9f1f1e5a7989cfb694d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additiona
l-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8
c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/op
t/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:32Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.866659 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:32Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.877321 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based 
request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:/
/fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:32Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.887444 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:32Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.896844 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gtmmn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gtmmn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:32Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.906028 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:32Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 
12:28:32.915594 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:32Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.924074 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a
69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:32Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.933692 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.933945 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.936589 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.936756 5024 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.936546 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dc6860b4d42d22ef807fe55ffa5981a6b9751db0b7626415f075387807019ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149adefb2db413aed81eb5aea43cb4c659b0c2efcb6394f3dc80ea73c7a775a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzcv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:32Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.936842 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:32Z","lastTransitionTime":"2025-10-07T12:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.948432 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:32Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.957523 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-07T12:28:32Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:32 crc kubenswrapper[5024]: I1007 12:28:32.967769 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:32Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.039435 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.039501 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.039511 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.039523 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.039531 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:33Z","lastTransitionTime":"2025-10-07T12:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.141988 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.142046 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.142064 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.142085 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.142098 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:33Z","lastTransitionTime":"2025-10-07T12:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.244483 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.244512 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.244520 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.244534 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.244542 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:33Z","lastTransitionTime":"2025-10-07T12:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.346944 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.347014 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.347030 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.347052 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.347068 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:33Z","lastTransitionTime":"2025-10-07T12:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.449547 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.449590 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.449607 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.449623 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.449633 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:33Z","lastTransitionTime":"2025-10-07T12:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.552089 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.552135 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.552165 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.552180 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.552190 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:33Z","lastTransitionTime":"2025-10-07T12:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.654022 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.654079 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.654097 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.654121 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.654169 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:33Z","lastTransitionTime":"2025-10-07T12:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.750965 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:33 crc kubenswrapper[5024]: E1007 12:28:33.751087 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.756885 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.756923 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.756933 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.756951 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.756961 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:33Z","lastTransitionTime":"2025-10-07T12:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.859180 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.859215 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.859225 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.859246 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.859257 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:33Z","lastTransitionTime":"2025-10-07T12:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.961511 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.961546 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.961555 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.961568 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:33 crc kubenswrapper[5024]: I1007 12:28:33.961576 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:33Z","lastTransitionTime":"2025-10-07T12:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.063477 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.063510 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.063519 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.063532 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.063540 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:34Z","lastTransitionTime":"2025-10-07T12:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.165681 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.165714 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.165726 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.165743 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.165756 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:34Z","lastTransitionTime":"2025-10-07T12:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.193365 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-metrics-certs\") pod \"network-metrics-daemon-gtmmn\" (UID: \"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\") " pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:34 crc kubenswrapper[5024]: E1007 12:28:34.193469 5024 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 07 12:28:34 crc kubenswrapper[5024]: E1007 12:28:34.193527 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-metrics-certs podName:ac027a0c-8461-4ea2-9a6e-40b4af6721b9 nodeName:}" failed. No retries permitted until 2025-10-07 12:28:50.193510346 +0000 UTC m=+68.269297184 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-metrics-certs") pod "network-metrics-daemon-gtmmn" (UID: "ac027a0c-8461-4ea2-9a6e-40b4af6721b9") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.267497 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.267783 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.267863 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.267975 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.268053 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:34Z","lastTransitionTime":"2025-10-07T12:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.370040 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.370081 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.370096 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.370112 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.370121 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:34Z","lastTransitionTime":"2025-10-07T12:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.374346 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.384241 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.387684 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:34Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.398651 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-07T12:28:34Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.410453 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:34Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.423229 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:34Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.434458 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:34Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.445996 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:34Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.456433 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:34Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.472189 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:34 crc 
kubenswrapper[5024]: I1007 12:28:34.472263 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.472272 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.472287 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.472296 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:34Z","lastTransitionTime":"2025-10-07T12:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.476998 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dff604a546898db06c641fe85aaec7ba7883e439a07e03883540ae3306548fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dff604a546898db06c641fe85aaec7ba7883e439a07e03883540ae3306548fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"perator LB template configs for network=default: []services.lbConfig(nil)\\\\nF1007 
12:28:29.596578 6680 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z]\\\\nI1007 12:28:29.597591 6680 services_controller.go:451] Built service openshift-machine-api/machine-api-operator cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLB\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9b4h6_openshift-ovn-kubernetes(da5e4e6d-289a-4fc4-9672-2709c87b5258)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd99
9008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:34Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.495947 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.496033 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.496057 5024 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.496078 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.496100 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:34 crc kubenswrapper[5024]: E1007 12:28:34.496215 5024 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 07 12:28:34 crc kubenswrapper[5024]: E1007 12:28:34.496260 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-07 12:29:06.496248057 +0000 UTC m=+84.572034895 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 07 12:28:34 crc kubenswrapper[5024]: E1007 12:28:34.496423 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 07 12:28:34 crc kubenswrapper[5024]: E1007 12:28:34.496442 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 07 12:28:34 crc kubenswrapper[5024]: E1007 12:28:34.496451 5024 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:28:34 crc kubenswrapper[5024]: E1007 12:28:34.496473 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-07 12:29:06.496466793 +0000 UTC m=+84.572253631 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:28:34 crc kubenswrapper[5024]: E1007 12:28:34.496636 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:29:06.496624818 +0000 UTC m=+84.572411656 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:28:34 crc kubenswrapper[5024]: E1007 12:28:34.496427 5024 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 07 12:28:34 crc kubenswrapper[5024]: E1007 12:28:34.496802 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-07 12:29:06.496791602 +0000 UTC m=+84.572578440 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 07 12:28:34 crc kubenswrapper[5024]: E1007 12:28:34.496704 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 07 12:28:34 crc kubenswrapper[5024]: E1007 12:28:34.496928 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 07 12:28:34 crc kubenswrapper[5024]: E1007 12:28:34.496994 5024 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:28:34 crc kubenswrapper[5024]: E1007 12:28:34.497077 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-07 12:29:06.49706746 +0000 UTC m=+84.572854298 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.499973 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5
226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:34Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.512533 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\"
,\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:34Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.523582 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:34Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.536451 5024 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d37bf017bd31dbbe180a6ac44f4953e2bfe47013e3a9f1f1e5a7989cfb694d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:34Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.546837 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pr
oxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-07T12:28:34Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.555703 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5
6rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:34Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.566034 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:34Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.574420 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.574458 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.574467 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:34 crc kubenswrapper[5024]: 
I1007 12:28:34.574481 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.574492 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:34Z","lastTransitionTime":"2025-10-07T12:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.576820 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dc6860b4d42d22ef807fe55ffa5981a6b9751db0b7626415f075387807019ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c
42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149adefb2db413aed81eb5aea43cb4c659b0c2efcb6394f3dc80ea73c7a775a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:16Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzcv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:34Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.588289 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gtmmn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gtmmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:34Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:34 crc 
kubenswrapper[5024]: I1007 12:28:34.676669 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.676700 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.676709 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.676721 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.676730 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:34Z","lastTransitionTime":"2025-10-07T12:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.751076 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.751193 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:34 crc kubenswrapper[5024]: E1007 12:28:34.751225 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.751100 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:34 crc kubenswrapper[5024]: E1007 12:28:34.751354 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:28:34 crc kubenswrapper[5024]: E1007 12:28:34.751497 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.779647 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.779686 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.779694 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.779723 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.779734 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:34Z","lastTransitionTime":"2025-10-07T12:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.881999 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.882039 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.882048 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.882063 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.882072 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:34Z","lastTransitionTime":"2025-10-07T12:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.984920 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.984970 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.984978 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.984991 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:34 crc kubenswrapper[5024]: I1007 12:28:34.985000 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:34Z","lastTransitionTime":"2025-10-07T12:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.087091 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.087124 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.087133 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.087163 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.087176 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:35Z","lastTransitionTime":"2025-10-07T12:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.189987 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.190034 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.190045 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.190062 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.190074 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:35Z","lastTransitionTime":"2025-10-07T12:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.292442 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.292479 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.292489 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.292505 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.292514 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:35Z","lastTransitionTime":"2025-10-07T12:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.395203 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.395246 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.395254 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.395273 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.395282 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:35Z","lastTransitionTime":"2025-10-07T12:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.497933 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.497972 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.497983 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.497997 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.498006 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:35Z","lastTransitionTime":"2025-10-07T12:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.601518 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.601589 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.601608 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.601638 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.601661 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:35Z","lastTransitionTime":"2025-10-07T12:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.704595 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.704646 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.704664 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.704685 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.704701 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:35Z","lastTransitionTime":"2025-10-07T12:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.751264 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:35 crc kubenswrapper[5024]: E1007 12:28:35.751415 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.807524 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.807577 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.807591 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.807620 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.807633 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:35Z","lastTransitionTime":"2025-10-07T12:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.909716 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.909757 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.909765 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.909781 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:35 crc kubenswrapper[5024]: I1007 12:28:35.909797 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:35Z","lastTransitionTime":"2025-10-07T12:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.012094 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.012181 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.012198 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.012216 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.012227 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:36Z","lastTransitionTime":"2025-10-07T12:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.114349 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.114393 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.114405 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.114423 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.114435 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:36Z","lastTransitionTime":"2025-10-07T12:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.217452 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.217493 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.217502 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.217517 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.217526 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:36Z","lastTransitionTime":"2025-10-07T12:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.319933 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.319978 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.319991 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.320009 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.320022 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:36Z","lastTransitionTime":"2025-10-07T12:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.421907 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.421940 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.421992 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.422008 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.422016 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:36Z","lastTransitionTime":"2025-10-07T12:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.524054 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.524084 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.524093 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.524107 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.524116 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:36Z","lastTransitionTime":"2025-10-07T12:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.626650 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.626697 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.626709 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.626729 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.626741 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:36Z","lastTransitionTime":"2025-10-07T12:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.730342 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.730412 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.730430 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.730454 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.730471 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:36Z","lastTransitionTime":"2025-10-07T12:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.751386 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.751517 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:36 crc kubenswrapper[5024]: E1007 12:28:36.751960 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:36 crc kubenswrapper[5024]: E1007 12:28:36.752252 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.752305 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:36 crc kubenswrapper[5024]: E1007 12:28:36.752404 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.833897 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.833947 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.833964 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.833987 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.834003 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:36Z","lastTransitionTime":"2025-10-07T12:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.935931 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.936008 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.936020 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.936044 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:36 crc kubenswrapper[5024]: I1007 12:28:36.936060 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:36Z","lastTransitionTime":"2025-10-07T12:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.038896 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.038953 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.038969 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.038991 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.039008 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:37Z","lastTransitionTime":"2025-10-07T12:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.140781 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.140831 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.140842 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.140859 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.140871 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:37Z","lastTransitionTime":"2025-10-07T12:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.243795 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.243844 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.243860 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.243882 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.243899 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:37Z","lastTransitionTime":"2025-10-07T12:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.347260 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.347305 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.347320 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.347340 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.347353 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:37Z","lastTransitionTime":"2025-10-07T12:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.450131 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.450200 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.450224 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.450248 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.450264 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:37Z","lastTransitionTime":"2025-10-07T12:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.553281 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.553351 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.553373 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.553401 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.553426 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:37Z","lastTransitionTime":"2025-10-07T12:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.655978 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.656045 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.656068 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.656101 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.656126 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:37Z","lastTransitionTime":"2025-10-07T12:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.751327 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:37 crc kubenswrapper[5024]: E1007 12:28:37.751490 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.759453 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.759493 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.759501 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.759516 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.759524 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:37Z","lastTransitionTime":"2025-10-07T12:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.862290 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.862357 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.862373 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.862398 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.862416 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:37Z","lastTransitionTime":"2025-10-07T12:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.965202 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.965260 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.965277 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.965305 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:37 crc kubenswrapper[5024]: I1007 12:28:37.965327 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:37Z","lastTransitionTime":"2025-10-07T12:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.068180 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.068229 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.068245 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.068267 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.068284 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:38Z","lastTransitionTime":"2025-10-07T12:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.170533 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.170563 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.170571 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.170585 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.170594 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:38Z","lastTransitionTime":"2025-10-07T12:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.272821 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.272859 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.272868 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.272900 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.272942 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:38Z","lastTransitionTime":"2025-10-07T12:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.375326 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.375368 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.375383 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.375403 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.375417 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:38Z","lastTransitionTime":"2025-10-07T12:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.477790 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.478458 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.478492 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.478520 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.478540 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:38Z","lastTransitionTime":"2025-10-07T12:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.581784 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.581831 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.581844 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.581860 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.581870 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:38Z","lastTransitionTime":"2025-10-07T12:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.684273 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.684320 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.684333 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.684351 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.684362 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:38Z","lastTransitionTime":"2025-10-07T12:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.750955 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.751033 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.750972 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:38 crc kubenswrapper[5024]: E1007 12:28:38.751123 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:38 crc kubenswrapper[5024]: E1007 12:28:38.751263 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:38 crc kubenswrapper[5024]: E1007 12:28:38.751401 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.786698 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.786736 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.786749 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.786763 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.786774 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:38Z","lastTransitionTime":"2025-10-07T12:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.888794 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.888857 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.888875 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.888898 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.888915 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:38Z","lastTransitionTime":"2025-10-07T12:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.992583 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.992646 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.992667 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.992695 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:38 crc kubenswrapper[5024]: I1007 12:28:38.992717 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:38Z","lastTransitionTime":"2025-10-07T12:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.095691 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.095748 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.095767 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.095791 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.095807 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:39Z","lastTransitionTime":"2025-10-07T12:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.198315 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.198368 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.198386 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.198408 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.198425 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:39Z","lastTransitionTime":"2025-10-07T12:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.301867 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.301920 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.301937 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.301960 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.301976 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:39Z","lastTransitionTime":"2025-10-07T12:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.403974 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.404027 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.404047 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.404072 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.404091 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:39Z","lastTransitionTime":"2025-10-07T12:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.506581 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.506648 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.506667 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.506696 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.506717 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:39Z","lastTransitionTime":"2025-10-07T12:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.555349 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.555395 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.555407 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.555423 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.555434 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:39Z","lastTransitionTime":"2025-10-07T12:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:39 crc kubenswrapper[5024]: E1007 12:28:39.574800 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:39Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.579296 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.579367 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.579385 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.579410 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.579427 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:39Z","lastTransitionTime":"2025-10-07T12:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:39 crc kubenswrapper[5024]: E1007 12:28:39.601239 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:39Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.606073 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.606111 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.606123 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.606156 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.606169 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:39Z","lastTransitionTime":"2025-10-07T12:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:39 crc kubenswrapper[5024]: E1007 12:28:39.626314 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:39Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.630851 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.630907 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.630924 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.630989 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.631008 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:39Z","lastTransitionTime":"2025-10-07T12:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:39 crc kubenswrapper[5024]: E1007 12:28:39.653674 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:39Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.658440 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.658492 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.658516 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.658544 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.658566 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:39Z","lastTransitionTime":"2025-10-07T12:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:39 crc kubenswrapper[5024]: E1007 12:28:39.679837 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:39Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:39 crc kubenswrapper[5024]: E1007 12:28:39.679942 5024 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.681919 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.681971 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.681994 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.682021 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.682043 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:39Z","lastTransitionTime":"2025-10-07T12:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.750872 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:39 crc kubenswrapper[5024]: E1007 12:28:39.750962 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.784287 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.784436 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.784457 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.784482 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.784500 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:39Z","lastTransitionTime":"2025-10-07T12:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.886819 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.886872 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.886886 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.886907 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.886922 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:39Z","lastTransitionTime":"2025-10-07T12:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.989017 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.989080 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.989094 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.989113 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:39 crc kubenswrapper[5024]: I1007 12:28:39.989125 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:39Z","lastTransitionTime":"2025-10-07T12:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.092208 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.092258 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.092273 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.092296 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.092314 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:40Z","lastTransitionTime":"2025-10-07T12:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.195114 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.195159 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.195168 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.195183 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.195190 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:40Z","lastTransitionTime":"2025-10-07T12:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.297238 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.297271 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.297279 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.297291 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.297300 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:40Z","lastTransitionTime":"2025-10-07T12:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.399125 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.399185 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.399195 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.399212 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.399224 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:40Z","lastTransitionTime":"2025-10-07T12:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.501360 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.501402 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.501412 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.501425 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.501435 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:40Z","lastTransitionTime":"2025-10-07T12:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.604527 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.604577 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.604597 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.604623 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.604640 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:40Z","lastTransitionTime":"2025-10-07T12:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.707077 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.707113 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.707124 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.707162 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.707177 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:40Z","lastTransitionTime":"2025-10-07T12:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.751576 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.751576 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:40 crc kubenswrapper[5024]: E1007 12:28:40.751686 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.751768 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:40 crc kubenswrapper[5024]: E1007 12:28:40.751845 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:40 crc kubenswrapper[5024]: E1007 12:28:40.751942 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.809802 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.809832 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.809840 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.809856 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.809869 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:40Z","lastTransitionTime":"2025-10-07T12:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.912379 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.912699 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.912826 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.912966 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:40 crc kubenswrapper[5024]: I1007 12:28:40.913112 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:40Z","lastTransitionTime":"2025-10-07T12:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.015920 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.015973 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.015986 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.016005 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.016020 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:41Z","lastTransitionTime":"2025-10-07T12:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.117978 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.118040 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.118056 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.118764 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.118834 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:41Z","lastTransitionTime":"2025-10-07T12:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.221184 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.221225 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.221240 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.221259 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.221273 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:41Z","lastTransitionTime":"2025-10-07T12:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.323991 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.324031 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.324044 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.324060 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.324071 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:41Z","lastTransitionTime":"2025-10-07T12:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.426206 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.426266 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.426278 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.426295 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.426306 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:41Z","lastTransitionTime":"2025-10-07T12:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.528823 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.528873 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.528889 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.528906 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.528919 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:41Z","lastTransitionTime":"2025-10-07T12:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.631677 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.631720 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.631730 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.631748 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.631759 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:41Z","lastTransitionTime":"2025-10-07T12:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.734739 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.734808 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.734825 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.734851 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.734869 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:41Z","lastTransitionTime":"2025-10-07T12:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.751192 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:41 crc kubenswrapper[5024]: E1007 12:28:41.751381 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.837699 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.837758 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.837782 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.837820 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.837843 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:41Z","lastTransitionTime":"2025-10-07T12:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.940478 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.940539 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.940557 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.940593 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:41 crc kubenswrapper[5024]: I1007 12:28:41.940609 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:41Z","lastTransitionTime":"2025-10-07T12:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.043766 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.043853 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.043874 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.043899 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.043916 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:42Z","lastTransitionTime":"2025-10-07T12:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.146629 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.146683 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.146695 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.146713 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.146724 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:42Z","lastTransitionTime":"2025-10-07T12:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.249398 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.249444 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.249454 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.249472 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.249483 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:42Z","lastTransitionTime":"2025-10-07T12:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.352405 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.352459 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.352468 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.352483 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.352493 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:42Z","lastTransitionTime":"2025-10-07T12:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.455124 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.455382 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.455478 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.455573 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.455686 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:42Z","lastTransitionTime":"2025-10-07T12:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.557945 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.558001 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.558016 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.558036 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.558051 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:42Z","lastTransitionTime":"2025-10-07T12:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.660839 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.660874 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.660885 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.660898 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.660906 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:42Z","lastTransitionTime":"2025-10-07T12:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.750777 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.750958 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:42 crc kubenswrapper[5024]: E1007 12:28:42.751296 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.751026 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:42 crc kubenswrapper[5024]: E1007 12:28:42.751360 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:42 crc kubenswrapper[5024]: E1007 12:28:42.751502 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.762956 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.763215 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.763293 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.763375 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.763460 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:42Z","lastTransitionTime":"2025-10-07T12:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.767546 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:42Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.784783 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:42Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.803196 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:42Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.822572 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:42Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.846297 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dff604a546898db06c641fe85aaec7ba7883e439a07e03883540ae3306548fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dff604a546898db06c641fe85aaec7ba7883e439a07e03883540ae3306548fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"perator LB template configs for network=default: []services.lbConfig(nil)\\\\nF1007 12:28:29.596578 6680 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network 
policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z]\\\\nI1007 12:28:29.597591 6680 services_controller.go:451] Built service openshift-machine-api/machine-api-operator cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLB\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9b4h6_openshift-ovn-kubernetes(da5e4e6d-289a-4fc4-9672-2709c87b5258)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd99
9008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:42Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.866050 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.866095 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.866110 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.866131 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.866170 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:42Z","lastTransitionTime":"2025-10-07T12:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.879364 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:42Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.899013 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\"
,\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:42Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.913736 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:42Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.929122 5024 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf42b512-d15d-4479-b83d-0f010c08f71b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b28eef295f695c0825d7dc0fa49ab5bf0b555ab62e94dd749dae2a20a5026c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca59ea1444a78e29f74f86b381c54dd111bb12f84d9f7f5eb1d528f605af81c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad05394fbaf6a0ec1639fbf66246b31263dd324edbbec4fe392ddd20d0ef1a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c653430bd4fd3560d45ba7f326ef0ebe1f3c64184c9cd6518417506eb6fd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4c653430bd4fd3560d45ba7f326ef0ebe1f3c64184c9cd6518417506eb6fd37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:42Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.944696 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d37bf017bd31dbbe180a6ac44f4953e2bfe47013e3a9f1f1e5a7989cfb694d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb7eb
9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:42Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.955244 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:42Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:42 crc kubenswrapper[5024]: 
I1007 12:28:42.966699 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":
\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:42Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.968080 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.968251 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.968353 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.968482 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.968585 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:42Z","lastTransitionTime":"2025-10-07T12:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.978931 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:42Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:42 crc kubenswrapper[5024]: I1007 12:28:42.989492 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dc6860b4
d42d22ef807fe55ffa5981a6b9751db0b7626415f075387807019ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149adefb2db413aed81eb5aea43cb4c659b0c2efcb6394f3dc80ea73c7a775a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzcv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:42Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.000610 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gtmmn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gtmmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:42Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:43 crc 
kubenswrapper[5024]: I1007 12:28:43.011472 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:43Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.021006 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-07T12:28:43Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.033320 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:43Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.071923 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.071972 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.071981 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.072000 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.072012 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:43Z","lastTransitionTime":"2025-10-07T12:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.174508 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.174575 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.174588 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.174609 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.174621 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:43Z","lastTransitionTime":"2025-10-07T12:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.277052 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.277085 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.277113 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.277129 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.277153 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:43Z","lastTransitionTime":"2025-10-07T12:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.379858 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.379888 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.379896 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.379911 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.379920 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:43Z","lastTransitionTime":"2025-10-07T12:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.482437 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.482493 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.482509 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.482526 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.482539 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:43Z","lastTransitionTime":"2025-10-07T12:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.584654 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.584691 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.584703 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.584720 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.584732 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:43Z","lastTransitionTime":"2025-10-07T12:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.687612 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.687701 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.687718 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.687740 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.687758 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:43Z","lastTransitionTime":"2025-10-07T12:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.750749 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:43 crc kubenswrapper[5024]: E1007 12:28:43.751034 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.791127 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.791211 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.791223 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.791272 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.791285 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:43Z","lastTransitionTime":"2025-10-07T12:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.893671 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.893730 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.893749 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.893773 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.893792 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:43Z","lastTransitionTime":"2025-10-07T12:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.996301 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.996346 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.996355 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.996372 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:43 crc kubenswrapper[5024]: I1007 12:28:43.996382 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:43Z","lastTransitionTime":"2025-10-07T12:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.098703 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.098736 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.098744 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.098756 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.098763 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:44Z","lastTransitionTime":"2025-10-07T12:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.201501 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.201569 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.201586 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.201609 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.201625 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:44Z","lastTransitionTime":"2025-10-07T12:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.304539 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.304591 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.304605 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.304624 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.304635 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:44Z","lastTransitionTime":"2025-10-07T12:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.407351 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.407379 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.407387 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.407399 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.407410 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:44Z","lastTransitionTime":"2025-10-07T12:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.510217 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.510266 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.510284 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.510306 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.510368 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:44Z","lastTransitionTime":"2025-10-07T12:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.612722 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.612774 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.612790 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.612813 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.612830 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:44Z","lastTransitionTime":"2025-10-07T12:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.715096 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.715204 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.715222 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.715250 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.715271 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:44Z","lastTransitionTime":"2025-10-07T12:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.751410 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.751477 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:44 crc kubenswrapper[5024]: E1007 12:28:44.751539 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:44 crc kubenswrapper[5024]: E1007 12:28:44.751632 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.751883 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:44 crc kubenswrapper[5024]: E1007 12:28:44.751954 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.817716 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.817763 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.817773 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.817788 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.817799 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:44Z","lastTransitionTime":"2025-10-07T12:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.920521 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.920588 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.920606 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.920630 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:44 crc kubenswrapper[5024]: I1007 12:28:44.920648 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:44Z","lastTransitionTime":"2025-10-07T12:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.024543 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.024593 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.024610 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.024630 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.024646 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:45Z","lastTransitionTime":"2025-10-07T12:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.127380 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.127407 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.127415 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.127427 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.127436 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:45Z","lastTransitionTime":"2025-10-07T12:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.229071 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.229120 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.229151 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.229171 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.229182 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:45Z","lastTransitionTime":"2025-10-07T12:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.330964 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.331006 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.331015 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.331031 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.331040 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:45Z","lastTransitionTime":"2025-10-07T12:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.432715 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.432765 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.432773 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.432787 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.432797 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:45Z","lastTransitionTime":"2025-10-07T12:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.535078 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.535116 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.535125 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.535156 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.535165 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:45Z","lastTransitionTime":"2025-10-07T12:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.637928 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.637959 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.637968 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.637980 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.637989 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:45Z","lastTransitionTime":"2025-10-07T12:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.741513 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.741591 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.741613 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.741643 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.741664 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:45Z","lastTransitionTime":"2025-10-07T12:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.750809 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:45 crc kubenswrapper[5024]: E1007 12:28:45.750913 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.844633 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.844678 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.844687 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.844704 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.844714 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:45Z","lastTransitionTime":"2025-10-07T12:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.947337 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.947378 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.947387 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.947405 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:45 crc kubenswrapper[5024]: I1007 12:28:45.947414 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:45Z","lastTransitionTime":"2025-10-07T12:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.050128 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.050187 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.050196 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.050211 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.050220 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:46Z","lastTransitionTime":"2025-10-07T12:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.153495 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.153578 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.153603 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.153634 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.153655 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:46Z","lastTransitionTime":"2025-10-07T12:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.256236 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.256282 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.256297 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.256316 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.256330 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:46Z","lastTransitionTime":"2025-10-07T12:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.359433 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.359485 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.359498 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.359516 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.359530 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:46Z","lastTransitionTime":"2025-10-07T12:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.462133 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.462202 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.462235 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.462259 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.462270 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:46Z","lastTransitionTime":"2025-10-07T12:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.565036 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.565105 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.565115 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.565132 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.565158 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:46Z","lastTransitionTime":"2025-10-07T12:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.667931 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.668177 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.668251 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.668321 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.668391 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:46Z","lastTransitionTime":"2025-10-07T12:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.751292 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.751365 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.751429 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.752162 5024 scope.go:117] "RemoveContainer" containerID="6dff604a546898db06c641fe85aaec7ba7883e439a07e03883540ae3306548fd" Oct 07 12:28:46 crc kubenswrapper[5024]: E1007 12:28:46.752553 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9b4h6_openshift-ovn-kubernetes(da5e4e6d-289a-4fc4-9672-2709c87b5258)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" Oct 07 12:28:46 crc kubenswrapper[5024]: E1007 12:28:46.752749 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:46 crc kubenswrapper[5024]: E1007 12:28:46.752810 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:46 crc kubenswrapper[5024]: E1007 12:28:46.752854 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.769900 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.770179 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.770334 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.770569 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.770803 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:46Z","lastTransitionTime":"2025-10-07T12:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.901349 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.901518 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.901577 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.901658 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:46 crc kubenswrapper[5024]: I1007 12:28:46.901736 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:46Z","lastTransitionTime":"2025-10-07T12:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.004015 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.004288 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.004383 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.004448 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.004521 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:47Z","lastTransitionTime":"2025-10-07T12:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.106427 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.106677 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.106768 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.106858 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.106934 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:47Z","lastTransitionTime":"2025-10-07T12:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.209859 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.209899 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.209908 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.209923 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.209934 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:47Z","lastTransitionTime":"2025-10-07T12:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.312075 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.312173 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.312186 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.312202 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.312217 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:47Z","lastTransitionTime":"2025-10-07T12:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.414526 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.414566 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.414576 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.414592 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.414601 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:47Z","lastTransitionTime":"2025-10-07T12:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.517472 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.517523 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.517539 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.517557 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.517570 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:47Z","lastTransitionTime":"2025-10-07T12:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.620130 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.620208 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.620220 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.620235 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.620245 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:47Z","lastTransitionTime":"2025-10-07T12:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.722673 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.722738 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.722749 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.722761 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.722770 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:47Z","lastTransitionTime":"2025-10-07T12:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.750476 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:47 crc kubenswrapper[5024]: E1007 12:28:47.750631 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.827084 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.827127 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.827155 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.827170 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.827178 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:47Z","lastTransitionTime":"2025-10-07T12:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.929401 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.929446 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.929459 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.929478 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:47 crc kubenswrapper[5024]: I1007 12:28:47.929492 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:47Z","lastTransitionTime":"2025-10-07T12:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.031648 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.031696 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.031706 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.031721 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.031731 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:48Z","lastTransitionTime":"2025-10-07T12:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.134520 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.134571 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.134581 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.134596 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.134605 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:48Z","lastTransitionTime":"2025-10-07T12:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.237171 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.237230 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.237248 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.237273 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.237293 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:48Z","lastTransitionTime":"2025-10-07T12:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.339390 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.339429 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.339440 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.339457 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.339494 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:48Z","lastTransitionTime":"2025-10-07T12:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.441579 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.441609 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.441620 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.441632 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.441640 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:48Z","lastTransitionTime":"2025-10-07T12:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.543915 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.543957 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.543968 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.544007 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.544020 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:48Z","lastTransitionTime":"2025-10-07T12:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.646813 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.646854 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.646865 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.646880 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.646891 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:48Z","lastTransitionTime":"2025-10-07T12:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.749922 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.750262 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.750351 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.750448 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.750534 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:48Z","lastTransitionTime":"2025-10-07T12:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.750639 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.750693 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.750588 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:48 crc kubenswrapper[5024]: E1007 12:28:48.750818 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:28:48 crc kubenswrapper[5024]: E1007 12:28:48.751011 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:48 crc kubenswrapper[5024]: E1007 12:28:48.751226 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.852947 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.853506 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.853588 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.853685 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.853815 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:48Z","lastTransitionTime":"2025-10-07T12:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.956905 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.956988 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.957005 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.957028 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:48 crc kubenswrapper[5024]: I1007 12:28:48.957045 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:48Z","lastTransitionTime":"2025-10-07T12:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.059680 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.059723 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.059734 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.059752 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.059764 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:49Z","lastTransitionTime":"2025-10-07T12:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.166342 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.166381 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.166390 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.166405 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.166414 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:49Z","lastTransitionTime":"2025-10-07T12:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.268888 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.268917 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.268925 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.268937 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.268946 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:49Z","lastTransitionTime":"2025-10-07T12:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.371363 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.371395 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.371403 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.371417 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.371426 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:49Z","lastTransitionTime":"2025-10-07T12:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.473940 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.473977 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.473988 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.474002 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.474012 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:49Z","lastTransitionTime":"2025-10-07T12:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.576659 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.576694 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.576705 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.576722 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.576735 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:49Z","lastTransitionTime":"2025-10-07T12:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.678817 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.679233 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.679333 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.679432 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.679522 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:49Z","lastTransitionTime":"2025-10-07T12:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.750793 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:49 crc kubenswrapper[5024]: E1007 12:28:49.750912 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.781393 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.781548 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.781613 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.781676 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.781742 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:49Z","lastTransitionTime":"2025-10-07T12:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.857381 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.857435 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.857450 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.857475 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.857492 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:49Z","lastTransitionTime":"2025-10-07T12:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:49 crc kubenswrapper[5024]: E1007 12:28:49.868622 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:49Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.872758 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.872792 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.872801 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.872815 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.872825 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:49Z","lastTransitionTime":"2025-10-07T12:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:49 crc kubenswrapper[5024]: E1007 12:28:49.884932 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:49Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.887997 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.888129 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.888221 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.888287 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.888346 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:49Z","lastTransitionTime":"2025-10-07T12:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:49 crc kubenswrapper[5024]: E1007 12:28:49.898121 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:49Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.901395 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.901425 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.901436 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.901454 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.901466 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:49Z","lastTransitionTime":"2025-10-07T12:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:49 crc kubenswrapper[5024]: E1007 12:28:49.914741 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:49Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.917837 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.917860 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.917870 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.917883 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.917892 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:49Z","lastTransitionTime":"2025-10-07T12:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:49 crc kubenswrapper[5024]: E1007 12:28:49.930492 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:49Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:49 crc kubenswrapper[5024]: E1007 12:28:49.930660 5024 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.932406 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.932442 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.932454 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.932473 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:49 crc kubenswrapper[5024]: I1007 12:28:49.932484 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:49Z","lastTransitionTime":"2025-10-07T12:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.034437 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.034681 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.034766 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.034833 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.034891 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:50Z","lastTransitionTime":"2025-10-07T12:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.137243 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.137308 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.137319 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.137334 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.137343 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:50Z","lastTransitionTime":"2025-10-07T12:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.239507 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.239545 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.239555 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.239571 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.239581 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:50Z","lastTransitionTime":"2025-10-07T12:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.251992 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-metrics-certs\") pod \"network-metrics-daemon-gtmmn\" (UID: \"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\") " pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:50 crc kubenswrapper[5024]: E1007 12:28:50.252222 5024 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 07 12:28:50 crc kubenswrapper[5024]: E1007 12:28:50.252323 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-metrics-certs podName:ac027a0c-8461-4ea2-9a6e-40b4af6721b9 nodeName:}" failed. No retries permitted until 2025-10-07 12:29:22.252306439 +0000 UTC m=+100.328093277 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-metrics-certs") pod "network-metrics-daemon-gtmmn" (UID: "ac027a0c-8461-4ea2-9a6e-40b4af6721b9") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.341660 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.341697 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.341708 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.341723 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.341734 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:50Z","lastTransitionTime":"2025-10-07T12:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.443934 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.443977 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.443987 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.444005 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.444020 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:50Z","lastTransitionTime":"2025-10-07T12:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.546370 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.546436 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.546454 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.546477 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.546497 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:50Z","lastTransitionTime":"2025-10-07T12:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.648618 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.648659 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.648669 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.648684 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.648695 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:50Z","lastTransitionTime":"2025-10-07T12:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.751006 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.751050 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.751050 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:50 crc kubenswrapper[5024]: E1007 12:28:50.751117 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:50 crc kubenswrapper[5024]: E1007 12:28:50.751256 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:28:50 crc kubenswrapper[5024]: E1007 12:28:50.751320 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.751566 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.751595 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.751607 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.751622 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.751641 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:50Z","lastTransitionTime":"2025-10-07T12:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.854052 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.854087 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.854099 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.854115 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.854126 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:50Z","lastTransitionTime":"2025-10-07T12:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.956536 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.956568 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.956580 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.956595 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:50 crc kubenswrapper[5024]: I1007 12:28:50.956605 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:50Z","lastTransitionTime":"2025-10-07T12:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.059126 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.059180 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.059189 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.059203 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.059212 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:51Z","lastTransitionTime":"2025-10-07T12:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.161264 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.161291 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.161304 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.161320 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.161329 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:51Z","lastTransitionTime":"2025-10-07T12:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.263241 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.263278 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.263287 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.263300 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.263309 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:51Z","lastTransitionTime":"2025-10-07T12:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.365559 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.365592 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.365603 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.365620 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.365630 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:51Z","lastTransitionTime":"2025-10-07T12:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.467873 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.467904 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.467913 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.467925 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.467934 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:51Z","lastTransitionTime":"2025-10-07T12:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.569848 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.569885 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.569893 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.569907 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.569916 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:51Z","lastTransitionTime":"2025-10-07T12:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.672172 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.672206 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.672215 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.672231 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.672242 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:51Z","lastTransitionTime":"2025-10-07T12:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.750959 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:51 crc kubenswrapper[5024]: E1007 12:28:51.751075 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.773710 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.773749 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.773760 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.773775 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.773787 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:51Z","lastTransitionTime":"2025-10-07T12:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.877229 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.877261 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.877270 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.877287 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.877298 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:51Z","lastTransitionTime":"2025-10-07T12:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.979595 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.979636 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.979650 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.979667 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:51 crc kubenswrapper[5024]: I1007 12:28:51.979681 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:51Z","lastTransitionTime":"2025-10-07T12:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.082003 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.082047 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.082056 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.082071 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.082081 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:52Z","lastTransitionTime":"2025-10-07T12:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.147020 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rwxtd_f1ac3df5-bf16-419a-87c5-9683eebe3506/kube-multus/0.log" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.147156 5024 generic.go:334] "Generic (PLEG): container finished" podID="f1ac3df5-bf16-419a-87c5-9683eebe3506" containerID="ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010" exitCode=1 Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.147186 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rwxtd" event={"ID":"f1ac3df5-bf16-419a-87c5-9683eebe3506","Type":"ContainerDied","Data":"ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010"} Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.147557 5024 scope.go:117] "RemoveContainer" containerID="ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.177627 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.184691 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.184724 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.184732 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.184746 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.184756 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:52Z","lastTransitionTime":"2025-10-07T12:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.192028 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.205491 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.220681 5024 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf42b512-d15d-4479-b83d-0f010c08f71b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b28eef295f695c0825d7dc0fa49ab5bf0b555ab62e94dd749dae2a20a5026c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca59ea1444a78e29f74f86b381c54dd111bb12f84d9f7f5eb1d528f605af81c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad05394fbaf6a0ec1639fbf66246b31263dd324edbbec4fe392ddd20d0ef1a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c653430bd4fd3560d45ba7f326ef0ebe1f3c64184c9cd6518417506eb6fd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4c653430bd4fd3560d45ba7f326ef0ebe1f3c64184c9cd6518417506eb6fd37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.239404 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d37bf017bd31dbbe180a6ac44f4953e2bfe47013e3a9f1f1e5a7989cfb694d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb7eb
9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.251260 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: 
I1007 12:28:52.260636 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":
\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.269982 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc
086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.278788 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dc6860b4d42d22ef807fe55ffa5981a6b9751db0b7626415f075387807019ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149adefb2db413aed81eb5aea43cb4c659b0c
2efcb6394f3dc80ea73c7a775a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzcv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.286531 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.286584 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.286599 5024 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.286617 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.286629 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:52Z","lastTransitionTime":"2025-10-07T12:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.287148 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gtmmn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gtmmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 
07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.299412 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.312626 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.324424 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.336483 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.347097 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.359507 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.371979 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:51Z\\\",\\\"message\\\":\\\"2025-10-07T12:28:06+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ddfcb17e-eb5f-4265-bb5f-76da035ef58c\\\\n2025-10-07T12:28:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ddfcb17e-eb5f-4265-bb5f-76da035ef58c to /host/opt/cni/bin/\\\\n2025-10-07T12:28:06Z [verbose] multus-daemon started\\\\n2025-10-07T12:28:06Z [verbose] Readiness Indicator file check\\\\n2025-10-07T12:28:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.393668 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.393714 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.393723 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.393743 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.393758 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:52Z","lastTransitionTime":"2025-10-07T12:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.394961 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dff604a546898db06c641fe85aaec7ba7883e439a07e03883540ae3306548fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dff604a546898db06c641fe85aaec7ba7883e439a07e03883540ae3306548fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"perator LB template configs for network=default: []services.lbConfig(nil)\\\\nF1007 12:28:29.596578 6680 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network 
policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z]\\\\nI1007 12:28:29.597591 6680 services_controller.go:451] Built service openshift-machine-api/machine-api-operator cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLB\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9b4h6_openshift-ovn-kubernetes(da5e4e6d-289a-4fc4-9672-2709c87b5258)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd99
9008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.502938 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.502999 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.503012 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.503032 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.503051 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:52Z","lastTransitionTime":"2025-10-07T12:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.605495 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.605531 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.605540 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.605553 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.605562 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:52Z","lastTransitionTime":"2025-10-07T12:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.709001 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.709045 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.709057 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.709072 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.709084 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:52Z","lastTransitionTime":"2025-10-07T12:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.750827 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.750927 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:52 crc kubenswrapper[5024]: E1007 12:28:52.750948 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:52 crc kubenswrapper[5024]: E1007 12:28:52.751105 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.751124 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:52 crc kubenswrapper[5024]: E1007 12:28:52.751205 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.764797 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.776248 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.786706 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.797820 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:51Z\\\",\\\"message\\\":\\\"2025-10-07T12:28:06+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ddfcb17e-eb5f-4265-bb5f-76da035ef58c\\\\n2025-10-07T12:28:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ddfcb17e-eb5f-4265-bb5f-76da035ef58c to /host/opt/cni/bin/\\\\n2025-10-07T12:28:06Z [verbose] multus-daemon started\\\\n2025-10-07T12:28:06Z [verbose] Readiness Indicator file check\\\\n2025-10-07T12:28:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.811368 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.811404 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.811415 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.811428 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.811438 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:52Z","lastTransitionTime":"2025-10-07T12:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.814658 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dff604a546898db06c641fe85aaec7ba7883e439a07e03883540ae3306548fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dff604a546898db06c641fe85aaec7ba7883e439a07e03883540ae3306548fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"perator LB template configs for network=default: []services.lbConfig(nil)\\\\nF1007 12:28:29.596578 6680 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network 
policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z]\\\\nI1007 12:28:29.597591 6680 services_controller.go:451] Built service openshift-machine-api/machine-api-operator cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLB\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9b4h6_openshift-ovn-kubernetes(da5e4e6d-289a-4fc4-9672-2709c87b5258)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd99
9008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.831546 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.843373 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\",\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based 
request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:/
/fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.855371 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.866840 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf42b512-d15d-4479-b83d-0f010c08f71b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b28eef295f695c0825d7dc0fa49ab5bf0b555ab62e94dd749dae2a20a5026c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca59ea1444a78e29f74f86b381c54dd111bb12f84d9f7f5eb1d528f605af81c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad05394fbaf6a0ec1639fbf66246b31263dd324edbbec4fe392ddd20d0ef1a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c653430bd4fd3560d45ba7f326ef0ebe1f3c64184c9cd6518417506eb6fd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c4c653430bd4fd3560d45ba7f326ef0ebe1f3c64184c9cd6518417506eb6fd37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.880357 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d37bf017bd31dbbe180a6ac44f4953e2bfe47013e3a9f1f1e5a7989cfb694d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb7eb
9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.890351 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: 
I1007 12:28:52.899417 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":
\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.910288 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc
086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.913582 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.913619 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.913627 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.913641 5024 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.913651 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:52Z","lastTransitionTime":"2025-10-07T12:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.921102 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dc6860b4d42d22ef807fe55ffa5981a6b9751db0b7626415f075387807019ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149adefb2db413aed81eb5aea43cb4c659b0c2efcb6394f3dc80ea73c7a775a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzcv6\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.930708 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gtmmn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gtmmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc 
kubenswrapper[5024]: I1007 12:28:52.942732 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.953237 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:52 crc kubenswrapper[5024]: I1007 12:28:52.962773 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:52Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.015299 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.015326 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.015334 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.015347 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.015355 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:53Z","lastTransitionTime":"2025-10-07T12:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.117618 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.117646 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.117654 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.117667 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.117677 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:53Z","lastTransitionTime":"2025-10-07T12:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.151562 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rwxtd_f1ac3df5-bf16-419a-87c5-9683eebe3506/kube-multus/0.log" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.151614 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rwxtd" event={"ID":"f1ac3df5-bf16-419a-87c5-9683eebe3506","Type":"ContainerStarted","Data":"79a2d929eb82dacb4be37b502ebd1bb31afa797eec7f9365c4c3a05be9154fbe"} Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.168750 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:53Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.183411 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\"
,\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:53Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.193593 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:53Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.202693 5024 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf42b512-d15d-4479-b83d-0f010c08f71b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b28eef295f695c0825d7dc0fa49ab5bf0b555ab62e94dd749dae2a20a5026c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca59ea1444a78e29f74f86b381c54dd111bb12f84d9f7f5eb1d528f605af81c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad05394fbaf6a0ec1639fbf66246b31263dd324edbbec4fe392ddd20d0ef1a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c653430bd4fd3560d45ba7f326ef0ebe1f3c64184c9cd6518417506eb6fd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4c653430bd4fd3560d45ba7f326ef0ebe1f3c64184c9cd6518417506eb6fd37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:53Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.217931 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d37bf017bd31dbbe180a6ac44f4953e2bfe47013e3a9f1f1e5a7989cfb694d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb7eb
9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:53Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.219702 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.219760 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.219773 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.219789 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.219800 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:53Z","lastTransitionTime":"2025-10-07T12:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.229314 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:53Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.237988 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:53Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.246335 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:53Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.254901 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dc6860b4d42d22ef807fe55ffa5981a6b9751db0b7626415f075387807019ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149adefb2db413aed81eb5aea43cb4c659b0c
2efcb6394f3dc80ea73c7a775a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzcv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:53Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.263315 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gtmmn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gtmmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:53Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:53 crc 
kubenswrapper[5024]: I1007 12:28:53.274028 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:53Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.283632 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-07T12:28:53Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.293043 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:53Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.304130 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:53Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.313691 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:53Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.323082 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.323156 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.323170 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.323189 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.323200 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:53Z","lastTransitionTime":"2025-10-07T12:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.327266 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:53Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.339684 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a2d929eb82dacb4be37b502ebd1bb31afa797eec7f9365c4c3a05be9154fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:51Z\\\",\\\"message\\\":\\\"2025-10-07T12:28:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ddfcb17e-eb5f-4265-bb5f-76da035ef58c\\\\n2025-10-07T12:28:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ddfcb17e-eb5f-4265-bb5f-76da035ef58c to /host/opt/cni/bin/\\\\n2025-10-07T12:28:06Z [verbose] multus-daemon started\\\\n2025-10-07T12:28:06Z [verbose] 
Readiness Indicator file check\\\\n2025-10-07T12:28:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:53Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.355324 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dff604a546898db06c641fe85aaec7ba7883e439a07e03883540ae3306548fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dff604a546898db06c641fe85aaec7ba7883e439a07e03883540ae3306548fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"perator LB template configs for network=default: []services.lbConfig(nil)\\\\nF1007 
12:28:29.596578 6680 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z]\\\\nI1007 12:28:29.597591 6680 services_controller.go:451] Built service openshift-machine-api/machine-api-operator cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLB\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9b4h6_openshift-ovn-kubernetes(da5e4e6d-289a-4fc4-9672-2709c87b5258)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd99
9008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:53Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.425427 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.425473 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.425490 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.425508 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.425518 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:53Z","lastTransitionTime":"2025-10-07T12:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.527298 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.527337 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.527347 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.527362 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.527372 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:53Z","lastTransitionTime":"2025-10-07T12:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.629640 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.629695 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.629704 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.629718 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.629726 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:53Z","lastTransitionTime":"2025-10-07T12:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.731277 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.731307 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.731314 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.731329 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.731339 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:53Z","lastTransitionTime":"2025-10-07T12:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.750481 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:53 crc kubenswrapper[5024]: E1007 12:28:53.750612 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.833419 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.833466 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.833476 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.833489 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.833498 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:53Z","lastTransitionTime":"2025-10-07T12:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.935902 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.935959 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.935971 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.935989 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:53 crc kubenswrapper[5024]: I1007 12:28:53.936001 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:53Z","lastTransitionTime":"2025-10-07T12:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.038062 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.038105 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.038115 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.038168 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.038178 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:54Z","lastTransitionTime":"2025-10-07T12:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.140579 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.140624 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.140635 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.140651 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.140662 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:54Z","lastTransitionTime":"2025-10-07T12:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.243489 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.243583 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.243604 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.243945 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.244198 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:54Z","lastTransitionTime":"2025-10-07T12:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.346630 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.346668 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.346678 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.346727 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.346737 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:54Z","lastTransitionTime":"2025-10-07T12:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.448985 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.449033 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.449046 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.449064 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.449075 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:54Z","lastTransitionTime":"2025-10-07T12:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.551509 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.551538 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.551546 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.551558 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.551566 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:54Z","lastTransitionTime":"2025-10-07T12:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.653896 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.653940 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.653951 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.653965 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.653976 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:54Z","lastTransitionTime":"2025-10-07T12:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.751628 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.751676 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.751632 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:54 crc kubenswrapper[5024]: E1007 12:28:54.751777 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:54 crc kubenswrapper[5024]: E1007 12:28:54.751857 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:54 crc kubenswrapper[5024]: E1007 12:28:54.751960 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.755575 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.755604 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.755615 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.755629 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.755639 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:54Z","lastTransitionTime":"2025-10-07T12:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.857558 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.857597 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.857605 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.857624 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.857633 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:54Z","lastTransitionTime":"2025-10-07T12:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.960131 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.960200 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.960210 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.960226 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:54 crc kubenswrapper[5024]: I1007 12:28:54.960237 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:54Z","lastTransitionTime":"2025-10-07T12:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.062382 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.062415 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.062423 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.062474 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.062485 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:55Z","lastTransitionTime":"2025-10-07T12:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.164433 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.164468 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.164477 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.164491 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.164500 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:55Z","lastTransitionTime":"2025-10-07T12:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.266633 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.266945 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.267014 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.267085 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.267178 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:55Z","lastTransitionTime":"2025-10-07T12:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.369371 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.369424 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.369436 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.369455 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.369467 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:55Z","lastTransitionTime":"2025-10-07T12:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.471682 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.471709 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.471717 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.471730 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.471738 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:55Z","lastTransitionTime":"2025-10-07T12:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.573986 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.574022 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.574048 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.574062 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.574070 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:55Z","lastTransitionTime":"2025-10-07T12:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.676614 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.676648 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.676656 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.676674 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.676692 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:55Z","lastTransitionTime":"2025-10-07T12:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.750841 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:55 crc kubenswrapper[5024]: E1007 12:28:55.750955 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.780090 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.780167 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.780182 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.780198 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.780207 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:55Z","lastTransitionTime":"2025-10-07T12:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.882337 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.882368 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.882376 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.882389 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.882398 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:55Z","lastTransitionTime":"2025-10-07T12:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.984780 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.984817 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.984842 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.984857 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:55 crc kubenswrapper[5024]: I1007 12:28:55.984865 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:55Z","lastTransitionTime":"2025-10-07T12:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.087069 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.087108 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.087118 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.087131 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.087154 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:56Z","lastTransitionTime":"2025-10-07T12:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.189494 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.189529 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.189538 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.189552 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.189562 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:56Z","lastTransitionTime":"2025-10-07T12:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.292817 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.292857 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.292869 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.292884 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.292897 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:56Z","lastTransitionTime":"2025-10-07T12:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.395262 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.395328 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.395340 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.395356 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.395368 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:56Z","lastTransitionTime":"2025-10-07T12:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.498062 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.498090 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.498097 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.498112 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.498121 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:56Z","lastTransitionTime":"2025-10-07T12:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.600823 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.600867 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.600878 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.600892 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.600901 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:56Z","lastTransitionTime":"2025-10-07T12:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.703309 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.703368 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.703379 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.703414 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.703431 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:56Z","lastTransitionTime":"2025-10-07T12:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.753320 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:56 crc kubenswrapper[5024]: E1007 12:28:56.753420 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.753565 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:56 crc kubenswrapper[5024]: E1007 12:28:56.753608 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.753700 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:56 crc kubenswrapper[5024]: E1007 12:28:56.753745 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.805197 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.805221 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.805229 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.805241 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.805249 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:56Z","lastTransitionTime":"2025-10-07T12:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.907365 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.907428 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.907443 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.907459 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:56 crc kubenswrapper[5024]: I1007 12:28:56.907470 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:56Z","lastTransitionTime":"2025-10-07T12:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.010589 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.010615 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.010638 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.010654 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.010663 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:57Z","lastTransitionTime":"2025-10-07T12:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.113010 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.113056 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.113065 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.113078 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.113087 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:57Z","lastTransitionTime":"2025-10-07T12:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.215288 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.215353 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.215366 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.215435 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.215454 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:57Z","lastTransitionTime":"2025-10-07T12:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.317585 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.317648 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.317666 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.317692 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.317712 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:57Z","lastTransitionTime":"2025-10-07T12:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.421488 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.421526 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.421535 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.421550 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.421560 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:57Z","lastTransitionTime":"2025-10-07T12:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.523965 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.523997 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.524007 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.524023 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.524033 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:57Z","lastTransitionTime":"2025-10-07T12:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.626998 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.627056 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.627073 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.627098 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.627117 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:57Z","lastTransitionTime":"2025-10-07T12:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.729151 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.729213 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.729231 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.729262 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.729278 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:57Z","lastTransitionTime":"2025-10-07T12:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.750926 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:57 crc kubenswrapper[5024]: E1007 12:28:57.751018 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.752038 5024 scope.go:117] "RemoveContainer" containerID="6dff604a546898db06c641fe85aaec7ba7883e439a07e03883540ae3306548fd" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.833815 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.833876 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.833900 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.833931 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.833954 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:57Z","lastTransitionTime":"2025-10-07T12:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.936920 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.936964 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.936973 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.936987 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:57 crc kubenswrapper[5024]: I1007 12:28:57.936997 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:57Z","lastTransitionTime":"2025-10-07T12:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.038947 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.038981 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.038992 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.039009 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.039020 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:58Z","lastTransitionTime":"2025-10-07T12:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.141952 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.141997 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.142006 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.142022 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.142032 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:58Z","lastTransitionTime":"2025-10-07T12:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.246040 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.246081 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.246091 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.246107 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.246146 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:58Z","lastTransitionTime":"2025-10-07T12:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.348277 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.348321 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.348329 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.348344 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.348355 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:58Z","lastTransitionTime":"2025-10-07T12:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.460106 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.460180 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.460193 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.460212 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.460224 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:58Z","lastTransitionTime":"2025-10-07T12:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.562783 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.562823 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.562834 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.562851 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.562863 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:58Z","lastTransitionTime":"2025-10-07T12:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.665256 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.665287 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.665296 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.665310 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.665318 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:58Z","lastTransitionTime":"2025-10-07T12:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.751427 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.751565 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:28:58 crc kubenswrapper[5024]: E1007 12:28:58.751599 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.751638 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:28:58 crc kubenswrapper[5024]: E1007 12:28:58.751747 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:28:58 crc kubenswrapper[5024]: E1007 12:28:58.751851 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.767998 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.768039 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.768049 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.768066 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.768077 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:58Z","lastTransitionTime":"2025-10-07T12:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.870750 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.870800 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.870813 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.870833 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.870844 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:58Z","lastTransitionTime":"2025-10-07T12:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.972794 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.972828 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.972838 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.972851 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:58 crc kubenswrapper[5024]: I1007 12:28:58.972860 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:58Z","lastTransitionTime":"2025-10-07T12:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.075692 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.075739 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.075752 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.075770 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.075782 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:59Z","lastTransitionTime":"2025-10-07T12:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.172408 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9b4h6_da5e4e6d-289a-4fc4-9672-2709c87b5258/ovnkube-controller/3.log" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.173217 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9b4h6_da5e4e6d-289a-4fc4-9672-2709c87b5258/ovnkube-controller/2.log" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.176483 5024 generic.go:334] "Generic (PLEG): container finished" podID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerID="f92c76a71b47cb0ec44854712d380966c02da46ebcd883c9b6a6168adddf2385" exitCode=1 Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.176529 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" event={"ID":"da5e4e6d-289a-4fc4-9672-2709c87b5258","Type":"ContainerDied","Data":"f92c76a71b47cb0ec44854712d380966c02da46ebcd883c9b6a6168adddf2385"} Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.176568 5024 scope.go:117] "RemoveContainer" containerID="6dff604a546898db06c641fe85aaec7ba7883e439a07e03883540ae3306548fd" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.177247 5024 scope.go:117] "RemoveContainer" containerID="f92c76a71b47cb0ec44854712d380966c02da46ebcd883c9b6a6168adddf2385" Oct 07 12:28:59 crc kubenswrapper[5024]: E1007 12:28:59.177445 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9b4h6_openshift-ovn-kubernetes(da5e4e6d-289a-4fc4-9672-2709c87b5258)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.177641 5024 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.177665 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.177675 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.177690 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.177756 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:59Z","lastTransitionTime":"2025-10-07T12:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.204535 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f92c76a71b47cb0ec44854712d380966c02da46ebcd883c9b6a6168adddf2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dff604a546898db06c641fe85aaec7ba7883e439a07e03883540ae3306548fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"perator LB template configs for network=default: []services.lbConfig(nil)\\\\nF1007 12:28:29.596578 6680 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network 
policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z]\\\\nI1007 12:28:29.597591 6680 services_controller.go:451] Built service openshift-machine-api/machine-api-operator cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLB\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92c76a71b47cb0ec44854712d380966c02da46ebcd883c9b6a6168adddf2385\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:58Z\\\",\\\"message\\\":\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-canary/ingress-canary\\\\\\\"}\\\\nI1007 12:28:58.898409 7053 services_controller.go:360] Finished syncing service ingress-canary on namespace openshift-ingress-canary for network=default : 2.249622ms\\\\nI1007 12:28:58.898463 7053 address_set.go:302] 
New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1007 12:28:58.898489 7053 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1007 12:28:58.898518 7053 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1007 12:28:58.898579 7053 factory.go:1336] Added *v1.Node event handler 7\\\\nI1007 12:28:58.898616 7053 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1007 12:28:58.898869 7053 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1007 12:28:58.898960 7053 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1007 12:28:58.898989 7053 ovnkube.go:599] Stopped ovnkube\\\\nI1007 12:28:58.899009 7053 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1007 12:28:58.899076 7053 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2
e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:59Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.221416 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:59Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.237173 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:59Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.252903 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:59Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.268652 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a2d929eb82dacb4be37b502ebd1bb31afa797eec7f9365c4c3a05be9154fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:51Z\\\",\\\"message\\\":\\\"2025-10-07T12:28:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ddfcb17e-eb5f-4265-bb5f-76da035ef58c\\\\n2025-10-07T12:28:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ddfcb17e-eb5f-4265-bb5f-76da035ef58c to /host/opt/cni/bin/\\\\n2025-10-07T12:28:06Z [verbose] multus-daemon started\\\\n2025-10-07T12:28:06Z [verbose] 
Readiness Indicator file check\\\\n2025-10-07T12:28:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:59Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.279863 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.279918 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.279932 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.279955 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.279973 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:59Z","lastTransitionTime":"2025-10-07T12:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.286112 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d37bf017bd31dbbe180a6ac44f4953e2bfe47013e3a9f1f1e5a7989cfb694d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:59Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.317705 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:59Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.332538 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\"
,\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:59Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.344218 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:59Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.357703 5024 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf42b512-d15d-4479-b83d-0f010c08f71b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b28eef295f695c0825d7dc0fa49ab5bf0b555ab62e94dd749dae2a20a5026c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca59ea1444a78e29f74f86b381c54dd111bb12f84d9f7f5eb1d528f605af81c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad05394fbaf6a0ec1639fbf66246b31263dd324edbbec4fe392ddd20d0ef1a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c653430bd4fd3560d45ba7f326ef0ebe1f3c64184c9cd6518417506eb6fd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4c653430bd4fd3560d45ba7f326ef0ebe1f3c64184c9cd6518417506eb6fd37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:59Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.374777 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gtmmn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gtmmn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:59Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.382867 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.382912 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.382923 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.382940 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.382952 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:59Z","lastTransitionTime":"2025-10-07T12:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.391610 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:59Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.403858 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:59Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.416380 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:59Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.433009 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dc6860b4d42d22ef807fe55ffa5981a6b9751db0b7626415f075387807019ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149adefb2db413aed81eb5aea43cb4c659b0c
2efcb6394f3dc80ea73c7a775a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzcv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:59Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.450644 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:59Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.468232 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-07T12:28:59Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.485214 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.485279 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.485303 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.485338 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.485363 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:59Z","lastTransitionTime":"2025-10-07T12:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.485884 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:59Z is after 2025-08-24T17:21:41Z" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.588253 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.588303 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.588312 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.588329 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.588340 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:59Z","lastTransitionTime":"2025-10-07T12:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.692238 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.692291 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.692302 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.692322 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.692334 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:59Z","lastTransitionTime":"2025-10-07T12:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.750545 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:28:59 crc kubenswrapper[5024]: E1007 12:28:59.750696 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.794818 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.794874 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.794889 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.794920 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.794934 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:59Z","lastTransitionTime":"2025-10-07T12:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.898469 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.898532 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.898553 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.898579 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:28:59 crc kubenswrapper[5024]: I1007 12:28:59.898598 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:28:59Z","lastTransitionTime":"2025-10-07T12:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.001968 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.002052 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.002070 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.002101 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.002120 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:00Z","lastTransitionTime":"2025-10-07T12:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.076055 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.076097 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.076107 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.076165 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.076179 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:00Z","lastTransitionTime":"2025-10-07T12:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:00 crc kubenswrapper[5024]: E1007 12:29:00.091510 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:00Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.095332 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.095384 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.095397 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.095437 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.095452 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:00Z","lastTransitionTime":"2025-10-07T12:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:00 crc kubenswrapper[5024]: E1007 12:29:00.114517 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:00Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.118097 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.118160 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.118171 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.118187 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.118198 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:00Z","lastTransitionTime":"2025-10-07T12:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:00 crc kubenswrapper[5024]: E1007 12:29:00.131018 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:00Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.134816 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.134852 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.134862 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.134878 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.134889 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:00Z","lastTransitionTime":"2025-10-07T12:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:00 crc kubenswrapper[5024]: E1007 12:29:00.148389 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:00Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.152265 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.152302 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.152313 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.152328 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.152338 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:00Z","lastTransitionTime":"2025-10-07T12:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:00 crc kubenswrapper[5024]: E1007 12:29:00.167435 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:00Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:00 crc kubenswrapper[5024]: E1007 12:29:00.167649 5024 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.170084 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.170111 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.170120 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.170155 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.170168 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:00Z","lastTransitionTime":"2025-10-07T12:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.182927 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9b4h6_da5e4e6d-289a-4fc4-9672-2709c87b5258/ovnkube-controller/3.log" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.273641 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.273713 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.273729 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.273753 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.273775 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:00Z","lastTransitionTime":"2025-10-07T12:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.376034 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.376078 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.376087 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.376103 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.376113 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:00Z","lastTransitionTime":"2025-10-07T12:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.478736 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.478796 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.478807 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.478824 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.478834 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:00Z","lastTransitionTime":"2025-10-07T12:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.581073 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.581125 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.581166 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.581185 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.581197 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:00Z","lastTransitionTime":"2025-10-07T12:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.683279 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.683671 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.683683 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.683702 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.683716 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:00Z","lastTransitionTime":"2025-10-07T12:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.750832 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.750934 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.750831 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:00 crc kubenswrapper[5024]: E1007 12:29:00.750971 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:29:00 crc kubenswrapper[5024]: E1007 12:29:00.751117 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:29:00 crc kubenswrapper[5024]: E1007 12:29:00.751206 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.786520 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.786559 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.786568 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.786584 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.786595 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:00Z","lastTransitionTime":"2025-10-07T12:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.888674 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.888707 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.888718 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.888733 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.888743 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:00Z","lastTransitionTime":"2025-10-07T12:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.991221 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.991255 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.991263 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.991276 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:00 crc kubenswrapper[5024]: I1007 12:29:00.991286 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:00Z","lastTransitionTime":"2025-10-07T12:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.093974 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.094013 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.094024 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.094038 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.094047 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:01Z","lastTransitionTime":"2025-10-07T12:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.195999 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.196035 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.196043 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.196056 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.196065 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:01Z","lastTransitionTime":"2025-10-07T12:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.298081 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.298114 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.298123 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.298147 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.298156 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:01Z","lastTransitionTime":"2025-10-07T12:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.399791 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.399836 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.399858 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.399877 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.399887 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:01Z","lastTransitionTime":"2025-10-07T12:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.501845 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.501886 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.501896 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.501911 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.501921 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:01Z","lastTransitionTime":"2025-10-07T12:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.604165 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.604447 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.604566 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.604734 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.604827 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:01Z","lastTransitionTime":"2025-10-07T12:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.707442 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.707476 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.707485 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.707500 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.707509 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:01Z","lastTransitionTime":"2025-10-07T12:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.751276 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:01 crc kubenswrapper[5024]: E1007 12:29:01.751682 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.810103 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.810157 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.810169 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.810184 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.810195 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:01Z","lastTransitionTime":"2025-10-07T12:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.912631 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.912657 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.912666 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.912677 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:01 crc kubenswrapper[5024]: I1007 12:29:01.912686 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:01Z","lastTransitionTime":"2025-10-07T12:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.015263 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.015303 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.015311 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.015324 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.015333 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:02Z","lastTransitionTime":"2025-10-07T12:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.117703 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.118112 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.118394 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.118578 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.118728 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:02Z","lastTransitionTime":"2025-10-07T12:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.220737 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.220781 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.220791 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.220807 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.220829 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:02Z","lastTransitionTime":"2025-10-07T12:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.323057 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.323097 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.323107 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.323123 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.323153 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:02Z","lastTransitionTime":"2025-10-07T12:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.425387 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.425416 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.425424 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.425438 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.425447 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:02Z","lastTransitionTime":"2025-10-07T12:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.527301 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.527342 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.527350 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.527365 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.527376 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:02Z","lastTransitionTime":"2025-10-07T12:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.630029 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.630091 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.630102 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.630117 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.630128 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:02Z","lastTransitionTime":"2025-10-07T12:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.733058 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.733092 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.733105 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.733120 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.733131 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:02Z","lastTransitionTime":"2025-10-07T12:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.750684 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.750796 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.750879 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:02 crc kubenswrapper[5024]: E1007 12:29:02.750971 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:29:02 crc kubenswrapper[5024]: E1007 12:29:02.751185 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:02 crc kubenswrapper[5024]: E1007 12:29:02.751387 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.765819 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a2d929eb82dacb4be37b502ebd1bb31afa797eec7f9365c4c3a05be9154fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:51Z\\\",\\\"message\\\":\\\"2025-10-07T12:28:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_ddfcb17e-eb5f-4265-bb5f-76da035ef58c\\\\n2025-10-07T12:28:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ddfcb17e-eb5f-4265-bb5f-76da035ef58c to /host/opt/cni/bin/\\\\n2025-10-07T12:28:06Z [verbose] multus-daemon started\\\\n2025-10-07T12:28:06Z [verbose] Readiness Indicator file check\\\\n2025-10-07T12:28:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:02Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.791842 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f92c76a71b47cb0ec44854712d380966c02da46ebcd883c9b6a6168adddf2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dff604a546898db06c641fe85aaec7ba7883e439a07e03883540ae3306548fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:29Z\\\",\\\"message\\\":\\\"perator LB template configs for network=default: []services.lbConfig(nil)\\\\nF1007 12:28:29.596578 6680 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network 
policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:28:29Z is after 2025-08-24T17:21:41Z]\\\\nI1007 12:28:29.597591 6680 services_controller.go:451] Built service openshift-machine-api/machine-api-operator cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLB\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92c76a71b47cb0ec44854712d380966c02da46ebcd883c9b6a6168adddf2385\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:58Z\\\",\\\"message\\\":\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-canary/ingress-canary\\\\\\\"}\\\\nI1007 12:28:58.898409 7053 services_controller.go:360] Finished syncing service ingress-canary on namespace openshift-ingress-canary for network=default : 2.249622ms\\\\nI1007 12:28:58.898463 7053 address_set.go:302] 
New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1007 12:28:58.898489 7053 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1007 12:28:58.898518 7053 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1007 12:28:58.898579 7053 factory.go:1336] Added *v1.Node event handler 7\\\\nI1007 12:28:58.898616 7053 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1007 12:28:58.898869 7053 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1007 12:28:58.898960 7053 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1007 12:28:58.898989 7053 ovnkube.go:599] Stopped ovnkube\\\\nI1007 12:28:58.899009 7053 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1007 12:28:58.899076 7053 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2
e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:02Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.809318 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:02Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.824300 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:02Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.835272 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.835308 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.835319 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.835340 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.835353 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:02Z","lastTransitionTime":"2025-10-07T12:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.843929 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:02Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.858039 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf42b512-d15d-4479-b83d-0f010c08f71b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b28eef295f695c0825d7dc0fa49ab5bf0b555ab62e94dd749dae2a20a5026c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca59ea1444a78e29f74f86b381c54dd111bb12f84d9f7f5eb1d528f605af81c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad05394fbaf6a0ec1639fbf66246b31263dd324edbbec4fe392ddd20d0ef1a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c653430bd4fd3560d45ba7f326ef0ebe1f3c64184c9cd6518417506eb6fd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c4c653430bd4fd3560d45ba7f326ef0ebe1f3c64184c9cd6518417506eb6fd37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:02Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.874856 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d37bf017bd31dbbe180a6ac44f4953e2bfe47013e3a9f1f1e5a7989cfb694d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb7eb
9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:02Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.892713 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a78368
8f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:02Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.905320 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\"
,\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:02Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.916661 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:02Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.927434 5024 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dc6860b4d42d22ef807fe55ffa5981a6b9751db0b7626415f075387807019ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149adefb2db413aed81eb5aea43cb4c659b0c2efcb6394f3dc80ea73c7a775a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzcv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:02Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.937963 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.938007 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 
12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.938015 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.938029 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.938038 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:02Z","lastTransitionTime":"2025-10-07T12:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.940558 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gtmmn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gtmmn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:02Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.952888 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:02Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 
12:29:02.964770 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:02Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.974479 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a
69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:02Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:02 crc kubenswrapper[5024]: I1007 12:29:02.987658 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:02Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.000242 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:02Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.031432 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-07T12:29:03Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.040592 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.040637 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.040647 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.040661 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.040670 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:03Z","lastTransitionTime":"2025-10-07T12:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.143086 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.143165 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.143174 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.143189 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.143199 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:03Z","lastTransitionTime":"2025-10-07T12:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.245046 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.245097 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.245115 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.245159 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.245169 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:03Z","lastTransitionTime":"2025-10-07T12:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.347572 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.347610 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.347619 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.347632 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.347644 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:03Z","lastTransitionTime":"2025-10-07T12:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.449554 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.449603 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.449614 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.449630 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.449647 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:03Z","lastTransitionTime":"2025-10-07T12:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.551839 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.551981 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.552000 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.552019 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.552032 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:03Z","lastTransitionTime":"2025-10-07T12:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.653527 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.653560 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.653569 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.653583 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.653591 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:03Z","lastTransitionTime":"2025-10-07T12:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.751241 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 07 12:29:03 crc kubenswrapper[5024]: E1007 12:29:03.751497 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.755835 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.755877 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.755887 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.755906 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.755922 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:03Z","lastTransitionTime":"2025-10-07T12:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.858195 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.858228 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.858237 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.858252 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.858261 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:03Z","lastTransitionTime":"2025-10-07T12:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.960395 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.960444 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.960461 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.960485 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:03 crc kubenswrapper[5024]: I1007 12:29:03.960502 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:03Z","lastTransitionTime":"2025-10-07T12:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.062674 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.062710 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.062718 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.062732 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.062741 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:04Z","lastTransitionTime":"2025-10-07T12:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.166472 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.166782 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.166813 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.166837 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.166850 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:04Z","lastTransitionTime":"2025-10-07T12:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.268559 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.268600 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.268611 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.268628 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.268639 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:04Z","lastTransitionTime":"2025-10-07T12:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.370944 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.370975 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.370984 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.370997 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.371007 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:04Z","lastTransitionTime":"2025-10-07T12:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.472999 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.473271 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.473288 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.473306 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.473324 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:04Z","lastTransitionTime":"2025-10-07T12:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.575441 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.575480 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.575502 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.575519 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.575529 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:04Z","lastTransitionTime":"2025-10-07T12:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.677863 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.677911 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.677922 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.677936 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.677948 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:04Z","lastTransitionTime":"2025-10-07T12:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.751017 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.751033 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn"
Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.751031 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 07 12:29:04 crc kubenswrapper[5024]: E1007 12:29:04.751284 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 07 12:29:04 crc kubenswrapper[5024]: E1007 12:29:04.751324 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9"
Oct 07 12:29:04 crc kubenswrapper[5024]: E1007 12:29:04.751658 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.779686 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.779751 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.779767 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.779790 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.779807 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:04Z","lastTransitionTime":"2025-10-07T12:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.881731 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.881780 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.881791 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.881809 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.881821 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:04Z","lastTransitionTime":"2025-10-07T12:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.983731 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.983772 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.983783 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.983798 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:04 crc kubenswrapper[5024]: I1007 12:29:04.983809 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:04Z","lastTransitionTime":"2025-10-07T12:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.086245 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.086285 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.086297 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.086314 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.086322 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:05Z","lastTransitionTime":"2025-10-07T12:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.188825 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.189081 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.189189 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.189282 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.189367 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:05Z","lastTransitionTime":"2025-10-07T12:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.291376 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.291420 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.291428 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.291445 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.291454 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:05Z","lastTransitionTime":"2025-10-07T12:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.394117 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.394177 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.394188 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.394200 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.394211 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:05Z","lastTransitionTime":"2025-10-07T12:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.497040 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.497085 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.497096 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.497112 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.497123 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:05Z","lastTransitionTime":"2025-10-07T12:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.599737 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.599781 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.599790 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.599804 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.599813 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:05Z","lastTransitionTime":"2025-10-07T12:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.701729 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.701779 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.701791 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.701808 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.701819 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:05Z","lastTransitionTime":"2025-10-07T12:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.750592 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 07 12:29:05 crc kubenswrapper[5024]: E1007 12:29:05.750821 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.760734 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.803792 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.803844 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.803857 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.803874 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.803886 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:05Z","lastTransitionTime":"2025-10-07T12:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.815986 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6"
Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.816787 5024 scope.go:117] "RemoveContainer" containerID="f92c76a71b47cb0ec44854712d380966c02da46ebcd883c9b6a6168adddf2385"
Oct 07 12:29:05 crc kubenswrapper[5024]: E1007 12:29:05.816949 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9b4h6_openshift-ovn-kubernetes(da5e4e6d-289a-4fc4-9672-2709c87b5258)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258"
Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.830919 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6386494d954b26c79eb21b79ba0fc104be40e2f88f706b39be6e5128db9052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-07T12:29:05Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.843548 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:05Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.854615 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2cd5d8f-e46b-495b-8ee9-45fccf0a5eb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670135e274ef583cd074f1d5f07b59626278cba64f32273fe44b3ee8e8767918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f4d0527470b130f88e3c0e84d67de9853deaee1e26d000187ff7328edd4b3d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4d0527470b130f88e3c0e84d67de9853deaee1e26d000187ff7328edd4b3d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:05Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.867219 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:05Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.879885 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45f0db06f953b15ef024e4f8ef5a8175e5902dd562f3f7d2837b9ed340515856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d94f53f0ffde1601eb1e662fea762e748d6d8ef8d13d3889b81f785584490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:05Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.895792 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rwxtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1ac3df5-bf16-419a-87c5-9683eebe3506\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a2d929eb82dacb4be37b502ebd1bb31afa797eec7f9365c4c3a05be9154fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:51Z\\\",\\\"message\\\":\\\"2025-10-07T12:28:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ddfcb17e-eb5f-4265-bb5f-76da035ef58c\\\\n2025-10-07T12:28:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ddfcb17e-eb5f-4265-bb5f-76da035ef58c to /host/opt/cni/bin/\\\\n2025-10-07T12:28:06Z [verbose] multus-daemon started\\\\n2025-10-07T12:28:06Z [verbose] 
Readiness Indicator file check\\\\n2025-10-07T12:28:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rwxtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:05Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.906531 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.906589 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.906606 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.906630 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.906647 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:05Z","lastTransitionTime":"2025-10-07T12:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.917761 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da5e4e6d-289a-4fc4-9672-2709c87b5258\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f92c76a71b47cb0ec44854712d380966c02da46ebcd883c9b6a6168adddf2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92c76a71b47cb0ec44854712d380966c02da46ebcd883c9b6a6168adddf2385\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-07T12:28:58Z\\\",\\\"message\\\":\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-canary/ingress-canary\\\\\\\"}\\\\nI1007 12:28:58.898409 7053 services_controller.go:360] Finished syncing service ingress-canary on namespace openshift-ingress-canary for network=default : 2.249622ms\\\\nI1007 12:28:58.898463 7053 
address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1007 12:28:58.898489 7053 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1007 12:28:58.898518 7053 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1007 12:28:58.898579 7053 factory.go:1336] Added *v1.Node event handler 7\\\\nI1007 12:28:58.898616 7053 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1007 12:28:58.898869 7053 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1007 12:28:58.898960 7053 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1007 12:28:58.898989 7053 ovnkube.go:599] Stopped ovnkube\\\\nI1007 12:28:58.899009 7053 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1007 12:28:58.899076 7053 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9b4h6_openshift-ovn-kubernetes(da5e4e6d-289a-4fc4-9672-2709c87b5258)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1850a5835ab9bd99
9008cf3c44e780556c4ab31945123f0d2b52e2e496688b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfsgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9b4h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:05Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.931195 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9d26fa0397d7028f5366c74ad9e70ea2bc9a4e22927eda42676ecbf1be5dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:05Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.946994 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:05Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.958819 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21da76ab-55e5-416e-891b-d957ac89eea5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66656a6c38b82bc02c0b78cdaefa1a9a500bb254060f80484a8e0b49b6c9c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f694d3d17dd019f52466ca9bf54aa5ed0dbd08d5da19e8aeb178ed4cb6950ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a22d92a862986a48ad0f85fb01b5efafe75d413e4d01335bb15735c5041312d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586688fc27177f8d9778bcc5d94baee820316f0510afb7516084c3c91aa0cce4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:05Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.974247 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf42b512-d15d-4479-b83d-0f010c08f71b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b28eef295f695c0825d7dc0fa49ab5bf0b555ab62e94dd749dae2a20a5026c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca59ea1444a78e29f74f86b381c54dd111bb12f84d9f7f5eb1d528f605af81c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad05394fbaf6a0ec1639fbf66246b31263dd324edbbec4fe392ddd20d0ef1a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c653430bd4fd3560d45ba7f326ef0ebe1f3c64184c9cd6518417506eb6fd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c4c653430bd4fd3560d45ba7f326ef0ebe1f3c64184c9cd6518417506eb6fd37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:05Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:05 crc kubenswrapper[5024]: I1007 12:29:05.988013 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-blx4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"911ebab3-c489-4067-b3af-80e52173c9b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d37bf017bd31dbbe180a6ac44f4953e2bfe47013e3a9f1f1e5a7989cfb694d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7437b5719c4826ac9e4f92151de050acda760d90e5830c97d382213cb522b6e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9642279ae35f0db250072a4e7ee2364cdb2409c92f84aa4da52ac13f9f476f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:08Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b12834dbe0eaaca2ad432e61f33ab8ea6ca77a2f3eddf357b716acfafc70ebc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb7eb
9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb7eb9e0993ca5991ce85e4847e314ce7da7c6e4a424e9f6d322e25f5c721bba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5da993525620ce8a232a9b6e22d805a2da47aa6b4d6eba8f38983d17224a909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bef50c0bd579d2bb81ca8df3a9e5acc49fa1bdc82b9515ca70d75507ce207be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vstdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-blx4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:05Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.008770 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.008811 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.008829 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.008854 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.008865 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:06Z","lastTransitionTime":"2025-10-07T12:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.017390 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954634a5-c594-4093-a66d-bedc27dbfc35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7608e5cdb1066bef25ac7368309879c098fc225893dfc377141fdfc1934fa75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b17db35e20837cddd21830705babe3a4749224585f770067d7e653542e3c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43551ee6099f762642499cadea8918da2327473e3bbd4b1ce14d7410d7c9bbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e2f1796e3a783688f14d84392c8d75b74404735871b6e029e73592c74b3ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e81659de5f867433cedb9dcc91befedffa99b854aff6d8a36f5049c5a2fb70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670e8ac2586280c7c39a2785a561be71f8ae5226535117b6b0e96a125935e88d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0e35aff796da2313079cb5e53a491e186647fbfea7d192bbcdfaed755e0e8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8a33d156a108ce7c5255b7e625be6236073d88dc3c49b1a20fba441aa8c6413\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-10-07T12:27:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.036348 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4b86b2-bde4-43b0-9d76-cd2dfba9b698\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:27:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a825fbb9e95f2d290e810f115c3a6cd001001c9563f76e52b3e0af63f24d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35290de8cd9a7bb2e4e98d100ca8ed398b5963650e2e7d94eda17487f4e0277\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0c90f359dfb06e9cdc6cc7c06dd8b29f92abd82342cc906a0cd31899b8de06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94affa67f188b30b1a615c095dfba24d16da6a2edf11ff7706369b15b51a3889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4671a28bce91d2cd0cec3f97f0f196f865fe832801ce4ea5d899abcfe694070\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-07T12:27:56Z\\\"
,\\\"message\\\":\\\"W1007 12:27:45.883685 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1007 12:27:45.884004 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759840065 cert, and key in /tmp/serving-cert-1058807071/serving-signer.crt, /tmp/serving-cert-1058807071/serving-signer.key\\\\nI1007 12:27:46.171301 1 observer_polling.go:159] Starting file observer\\\\nW1007 12:27:46.175943 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1007 12:27:46.176178 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1007 12:27:46.179898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1058807071/tls.crt::/tmp/serving-cert-1058807071/tls.key\\\\\\\"\\\\nF1007 12:27:56.428640 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b089f3678099010d3bfe6cc8fdc37bed56e5d4e42f1f94315ccd6ed087d54fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:27:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe155fe19e6bfb9e9f388f40c1331ba1c95117710f6d757192a043287cca2e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-07T12:27:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-07T12:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:27:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.050316 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-flt2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b45b7f-7a47-4643-87f1-4aa98912c0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e2ba699db09a600b357eb93e7fe87802a4757b9e6220913040d69a94af1699\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frszd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-flt2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.061379 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"295aa2ac-5da1-4e47-ad64-b8e7c34e576a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dc6860b4d42d22ef807fe55ffa5981a6b9751db0b7626415f075387807019ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149adefb2db413aed81eb5aea43cb4c659b0c
2efcb6394f3dc80ea73c7a775a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smp4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzcv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.073601 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gtmmn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmrt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gtmmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:06 crc 
kubenswrapper[5024]: I1007 12:29:06.084554 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"273432b3-0436-4a74-afa3-7070f9bf5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc3d1ffae595fb3f674321448f983261b72b4b28abf5d9ad9d61404743a6ac8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2nr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t95cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.096262 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lzg82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6d4cde-c984-4157-b9e1-25d800a74264\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-07T12:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc16f7522670a81e6703a52cc60624e3115973e9c733d292bee76eaea228b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-07T12:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56rq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-07T12:28:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lzg82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:06Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.119801 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.119847 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.119859 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.119875 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.119887 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:06Z","lastTransitionTime":"2025-10-07T12:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.222556 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.222594 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.222606 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.222625 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.222638 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:06Z","lastTransitionTime":"2025-10-07T12:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.325109 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.325182 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.325221 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.325241 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.325252 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:06Z","lastTransitionTime":"2025-10-07T12:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.427583 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.427622 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.427629 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.427645 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.427654 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:06Z","lastTransitionTime":"2025-10-07T12:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.512489 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:29:06 crc kubenswrapper[5024]: E1007 12:29:06.512615 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-07 12:30:10.512591745 +0000 UTC m=+148.588378583 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.512651 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.512723 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.512749 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.512772 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:06 crc kubenswrapper[5024]: E1007 12:29:06.512849 5024 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 07 12:29:06 crc kubenswrapper[5024]: E1007 12:29:06.512888 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-07 12:30:10.512878663 +0000 UTC m=+148.588665501 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 07 12:29:06 crc kubenswrapper[5024]: E1007 12:29:06.513087 5024 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 07 12:29:06 crc kubenswrapper[5024]: E1007 12:29:06.513119 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-07 12:30:10.51310955 +0000 UTC m=+148.588896398 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 07 12:29:06 crc kubenswrapper[5024]: E1007 12:29:06.513297 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 07 12:29:06 crc kubenswrapper[5024]: E1007 12:29:06.513314 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 07 12:29:06 crc kubenswrapper[5024]: E1007 12:29:06.513325 5024 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:29:06 crc kubenswrapper[5024]: E1007 12:29:06.513353 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-07 12:30:10.513344516 +0000 UTC m=+148.589131354 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:29:06 crc kubenswrapper[5024]: E1007 12:29:06.513350 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 07 12:29:06 crc kubenswrapper[5024]: E1007 12:29:06.513392 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 07 12:29:06 crc kubenswrapper[5024]: E1007 12:29:06.513409 5024 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:29:06 crc kubenswrapper[5024]: E1007 12:29:06.513477 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-07 12:30:10.513457209 +0000 UTC m=+148.589244077 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.530118 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.530170 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.530183 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.530198 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.530207 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:06Z","lastTransitionTime":"2025-10-07T12:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.632843 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.632879 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.632888 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.632904 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.632914 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:06Z","lastTransitionTime":"2025-10-07T12:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.735229 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.735278 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.735293 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.735315 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.735333 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:06Z","lastTransitionTime":"2025-10-07T12:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.751272 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.751323 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:06 crc kubenswrapper[5024]: E1007 12:29:06.751375 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.751416 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:06 crc kubenswrapper[5024]: E1007 12:29:06.751532 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:06 crc kubenswrapper[5024]: E1007 12:29:06.751604 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.837399 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.837456 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.837467 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.837484 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.837497 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:06Z","lastTransitionTime":"2025-10-07T12:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.939763 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.939798 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.939808 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.939840 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:06 crc kubenswrapper[5024]: I1007 12:29:06.939852 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:06Z","lastTransitionTime":"2025-10-07T12:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.041962 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.041994 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.042002 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.042015 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.042025 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:07Z","lastTransitionTime":"2025-10-07T12:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.144520 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.144554 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.144563 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.144577 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.144585 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:07Z","lastTransitionTime":"2025-10-07T12:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.246661 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.246698 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.246710 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.246728 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.246741 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:07Z","lastTransitionTime":"2025-10-07T12:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.349024 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.349089 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.349111 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.349178 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.349204 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:07Z","lastTransitionTime":"2025-10-07T12:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.451650 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.451678 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.451686 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.451702 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.451711 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:07Z","lastTransitionTime":"2025-10-07T12:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.554349 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.554380 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.554389 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.554401 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.554410 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:07Z","lastTransitionTime":"2025-10-07T12:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.657103 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.657132 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.657164 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.657180 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.657211 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:07Z","lastTransitionTime":"2025-10-07T12:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.750921 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:07 crc kubenswrapper[5024]: E1007 12:29:07.751090 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.760289 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.760348 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.760369 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.760394 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.760412 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:07Z","lastTransitionTime":"2025-10-07T12:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.862350 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.862380 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.862388 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.862402 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.862410 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:07Z","lastTransitionTime":"2025-10-07T12:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.964767 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.964811 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.964823 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.964844 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:07 crc kubenswrapper[5024]: I1007 12:29:07.964859 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:07Z","lastTransitionTime":"2025-10-07T12:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.067309 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.067340 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.067349 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.067363 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.067371 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:08Z","lastTransitionTime":"2025-10-07T12:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.169831 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.169872 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.169887 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.169907 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.169922 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:08Z","lastTransitionTime":"2025-10-07T12:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.271616 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.271659 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.271675 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.271693 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.271703 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:08Z","lastTransitionTime":"2025-10-07T12:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.373703 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.373751 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.373767 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.373790 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.373807 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:08Z","lastTransitionTime":"2025-10-07T12:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.476548 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.476588 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.476598 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.476612 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.476622 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:08Z","lastTransitionTime":"2025-10-07T12:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.579308 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.579350 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.579361 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.579384 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.579394 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:08Z","lastTransitionTime":"2025-10-07T12:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.682577 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.682648 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.682675 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.682709 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.682804 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:08Z","lastTransitionTime":"2025-10-07T12:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.751276 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.751365 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:08 crc kubenswrapper[5024]: E1007 12:29:08.751408 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:29:08 crc kubenswrapper[5024]: E1007 12:29:08.751584 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.751939 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:08 crc kubenswrapper[5024]: E1007 12:29:08.752093 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.785677 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.785728 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.785742 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.785763 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.785778 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:08Z","lastTransitionTime":"2025-10-07T12:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.888702 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.888847 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.888868 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.888917 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.888936 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:08Z","lastTransitionTime":"2025-10-07T12:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.991648 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.991721 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.991733 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.991754 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:08 crc kubenswrapper[5024]: I1007 12:29:08.991768 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:08Z","lastTransitionTime":"2025-10-07T12:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.094704 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.094777 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.094797 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.094823 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.094847 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:09Z","lastTransitionTime":"2025-10-07T12:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.197792 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.197840 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.197851 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.197869 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.197883 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:09Z","lastTransitionTime":"2025-10-07T12:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.300631 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.300679 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.300689 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.300703 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.300711 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:09Z","lastTransitionTime":"2025-10-07T12:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.402620 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.402659 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.402669 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.402685 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.402695 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:09Z","lastTransitionTime":"2025-10-07T12:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.553688 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.553730 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.553739 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.553755 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.553765 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:09Z","lastTransitionTime":"2025-10-07T12:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.656223 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.656262 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.656270 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.656284 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.656293 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:09Z","lastTransitionTime":"2025-10-07T12:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.751324 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:09 crc kubenswrapper[5024]: E1007 12:29:09.751529 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.757864 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.757914 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.757928 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.757945 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.757956 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:09Z","lastTransitionTime":"2025-10-07T12:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.860204 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.860252 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.860264 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.860298 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.860309 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:09Z","lastTransitionTime":"2025-10-07T12:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.962849 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.962887 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.962896 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.962911 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:09 crc kubenswrapper[5024]: I1007 12:29:09.962920 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:09Z","lastTransitionTime":"2025-10-07T12:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.065032 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.065072 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.065080 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.065095 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.065105 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:10Z","lastTransitionTime":"2025-10-07T12:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.167540 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.167584 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.167593 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.167606 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.167615 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:10Z","lastTransitionTime":"2025-10-07T12:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.270298 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.270352 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.270362 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.270380 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.270392 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:10Z","lastTransitionTime":"2025-10-07T12:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.372801 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.372854 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.372865 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.372885 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.372898 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:10Z","lastTransitionTime":"2025-10-07T12:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.460334 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.460382 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.460398 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.460412 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.460421 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:10Z","lastTransitionTime":"2025-10-07T12:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:10 crc kubenswrapper[5024]: E1007 12:29:10.472075 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:10Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.475099 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.475155 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.475168 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.475185 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.475197 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:10Z","lastTransitionTime":"2025-10-07T12:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:10 crc kubenswrapper[5024]: E1007 12:29:10.493089 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:10Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.496050 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.496088 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.496097 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.496111 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.496122 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:10Z","lastTransitionTime":"2025-10-07T12:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:10 crc kubenswrapper[5024]: E1007 12:29:10.512563 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:10Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.516000 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.518666 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.518698 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.518721 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.518734 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:10Z","lastTransitionTime":"2025-10-07T12:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:10 crc kubenswrapper[5024]: E1007 12:29:10.531708 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:10Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.535073 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.535117 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.535131 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.535184 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.535195 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:10Z","lastTransitionTime":"2025-10-07T12:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:10 crc kubenswrapper[5024]: E1007 12:29:10.546228 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-07T12:29:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6f479552-acfd-496b-8406-45ea4b4aa6ef\\\",\\\"systemUUID\\\":\\\"b2d72a02-4b40-4530-9891-327ad0d24531\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-07T12:29:10Z is after 2025-08-24T17:21:41Z" Oct 07 12:29:10 crc kubenswrapper[5024]: E1007 12:29:10.546562 5024 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.548205 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.548242 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.548251 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.548264 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.548274 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:10Z","lastTransitionTime":"2025-10-07T12:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.650608 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.650641 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.650650 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.650663 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.650673 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:10Z","lastTransitionTime":"2025-10-07T12:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.750909 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.751015 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.751212 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:10 crc kubenswrapper[5024]: E1007 12:29:10.751206 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:29:10 crc kubenswrapper[5024]: E1007 12:29:10.751392 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:10 crc kubenswrapper[5024]: E1007 12:29:10.751641 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.752510 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.752538 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.752549 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.752586 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.752599 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:10Z","lastTransitionTime":"2025-10-07T12:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.854288 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.854332 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.854350 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.854369 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.854379 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:10Z","lastTransitionTime":"2025-10-07T12:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.956976 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.957014 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.957022 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.957037 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:10 crc kubenswrapper[5024]: I1007 12:29:10.957046 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:10Z","lastTransitionTime":"2025-10-07T12:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.059739 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.060064 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.060084 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.060105 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.060118 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:11Z","lastTransitionTime":"2025-10-07T12:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.162962 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.163023 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.163039 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.163064 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.163080 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:11Z","lastTransitionTime":"2025-10-07T12:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.264816 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.264858 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.264869 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.264885 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.264895 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:11Z","lastTransitionTime":"2025-10-07T12:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.366913 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.366955 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.366966 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.366985 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.367003 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:11Z","lastTransitionTime":"2025-10-07T12:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.469665 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.469729 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.469751 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.469779 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.469800 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:11Z","lastTransitionTime":"2025-10-07T12:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.572589 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.572655 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.572675 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.572702 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.572722 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:11Z","lastTransitionTime":"2025-10-07T12:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.675914 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.675960 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.675978 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.676000 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.676017 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:11Z","lastTransitionTime":"2025-10-07T12:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.751437 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:11 crc kubenswrapper[5024]: E1007 12:29:11.751752 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.779498 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.779559 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.779584 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.779613 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.779637 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:11Z","lastTransitionTime":"2025-10-07T12:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.883081 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.883171 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.883192 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.883216 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.883235 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:11Z","lastTransitionTime":"2025-10-07T12:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.986309 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.986390 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.986407 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.986430 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:11 crc kubenswrapper[5024]: I1007 12:29:11.986452 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:11Z","lastTransitionTime":"2025-10-07T12:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.089442 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.089479 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.089490 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.089506 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.089516 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:12Z","lastTransitionTime":"2025-10-07T12:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.191337 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.191400 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.191418 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.191444 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.191462 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:12Z","lastTransitionTime":"2025-10-07T12:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.293276 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.293336 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.293353 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.293380 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.293398 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:12Z","lastTransitionTime":"2025-10-07T12:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.395467 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.395528 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.395546 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.395571 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.395593 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:12Z","lastTransitionTime":"2025-10-07T12:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.497748 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.497792 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.497804 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.497821 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.497832 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:12Z","lastTransitionTime":"2025-10-07T12:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.600221 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.600537 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.600554 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.600574 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.600582 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:12Z","lastTransitionTime":"2025-10-07T12:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.703279 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.703318 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.703327 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.703343 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.703353 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:12Z","lastTransitionTime":"2025-10-07T12:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.751298 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.751365 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:12 crc kubenswrapper[5024]: E1007 12:29:12.751499 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.751578 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:12 crc kubenswrapper[5024]: E1007 12:29:12.751727 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:29:12 crc kubenswrapper[5024]: E1007 12:29:12.752089 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.808959 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=68.808931475 podStartE2EDuration="1m8.808931475s" podCreationTimestamp="2025-10-07 12:28:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:29:12.808438141 +0000 UTC m=+90.884224979" watchObservedRunningTime="2025-10-07 12:29:12.808931475 +0000 UTC m=+90.884718353" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.809090 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.809360 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.809372 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.809387 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.809397 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:12Z","lastTransitionTime":"2025-10-07T12:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.845394 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=70.84537221 podStartE2EDuration="1m10.84537221s" podCreationTimestamp="2025-10-07 12:28:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:29:12.825988605 +0000 UTC m=+90.901775443" watchObservedRunningTime="2025-10-07 12:29:12.84537221 +0000 UTC m=+90.921159048" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.861609 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=66.861591687 podStartE2EDuration="1m6.861591687s" podCreationTimestamp="2025-10-07 12:28:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:29:12.845904634 +0000 UTC m=+90.921691472" watchObservedRunningTime="2025-10-07 12:29:12.861591687 +0000 UTC m=+90.937378535" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.861750 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=38.861744561 podStartE2EDuration="38.861744561s" podCreationTimestamp="2025-10-07 12:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:29:12.86099897 +0000 UTC m=+90.936785808" watchObservedRunningTime="2025-10-07 12:29:12.861744561 +0000 UTC m=+90.937531409" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.902569 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" 
podStartSLOduration=69.902551826 podStartE2EDuration="1m9.902551826s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:29:12.901901778 +0000 UTC m=+90.977688606" watchObservedRunningTime="2025-10-07 12:29:12.902551826 +0000 UTC m=+90.978338654" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.902669 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-blx4r" podStartSLOduration=69.902665539 podStartE2EDuration="1m9.902665539s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:29:12.886628917 +0000 UTC m=+90.962415755" watchObservedRunningTime="2025-10-07 12:29:12.902665539 +0000 UTC m=+90.978452377" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.911775 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.911802 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.911810 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.911823 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.911831 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:12Z","lastTransitionTime":"2025-10-07T12:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.913163 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-lzg82" podStartSLOduration=69.913152918 podStartE2EDuration="1m9.913152918s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:29:12.912719746 +0000 UTC m=+90.988506584" watchObservedRunningTime="2025-10-07 12:29:12.913152918 +0000 UTC m=+90.988939756" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.924851 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-flt2r" podStartSLOduration=69.92483367 podStartE2EDuration="1m9.92483367s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:29:12.924702077 +0000 UTC m=+91.000488955" watchObservedRunningTime="2025-10-07 12:29:12.92483367 +0000 UTC m=+91.000620528" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.949652 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzcv6" podStartSLOduration=69.949633094 podStartE2EDuration="1m9.949633094s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:29:12.936660487 +0000 UTC m=+91.012447365" watchObservedRunningTime="2025-10-07 12:29:12.949633094 +0000 UTC m=+91.025419922" Oct 07 12:29:12 crc kubenswrapper[5024]: I1007 12:29:12.973864 5024 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=7.973849462 podStartE2EDuration="7.973849462s" podCreationTimestamp="2025-10-07 12:29:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:29:12.962197481 +0000 UTC m=+91.037984319" watchObservedRunningTime="2025-10-07 12:29:12.973849462 +0000 UTC m=+91.049636300" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.014637 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.014680 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.014692 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.014706 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.014718 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:13Z","lastTransitionTime":"2025-10-07T12:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.053921 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-rwxtd" podStartSLOduration=70.053899809 podStartE2EDuration="1m10.053899809s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:29:13.053756045 +0000 UTC m=+91.129542883" watchObservedRunningTime="2025-10-07 12:29:13.053899809 +0000 UTC m=+91.129686647" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.116418 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.116457 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.116467 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.116484 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.116495 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:13Z","lastTransitionTime":"2025-10-07T12:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.219478 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.219528 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.219538 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.219554 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.219563 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:13Z","lastTransitionTime":"2025-10-07T12:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.322201 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.322494 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.322588 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.322718 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.322813 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:13Z","lastTransitionTime":"2025-10-07T12:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.441543 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.441584 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.441593 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.441607 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.441616 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:13Z","lastTransitionTime":"2025-10-07T12:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.544183 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.544697 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.544792 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.544871 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.544937 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:13Z","lastTransitionTime":"2025-10-07T12:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.647431 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.647733 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.647818 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.647900 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.647966 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:13Z","lastTransitionTime":"2025-10-07T12:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.750559 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.750579 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.750711 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.750722 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.750738 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.750748 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:13Z","lastTransitionTime":"2025-10-07T12:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:29:13 crc kubenswrapper[5024]: E1007 12:29:13.750849 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.853211 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.853257 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.853268 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.853285 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.853299 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:13Z","lastTransitionTime":"2025-10-07T12:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.956026 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.956091 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.956109 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.956158 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:13 crc kubenswrapper[5024]: I1007 12:29:13.956175 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:13Z","lastTransitionTime":"2025-10-07T12:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.060702 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.060779 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.060800 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.060825 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.060843 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:14Z","lastTransitionTime":"2025-10-07T12:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.163542 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.163605 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.163626 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.163650 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.163667 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:14Z","lastTransitionTime":"2025-10-07T12:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.266114 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.266160 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.266174 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.266193 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.266211 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:14Z","lastTransitionTime":"2025-10-07T12:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.368651 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.368695 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.368706 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.368724 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.368737 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:14Z","lastTransitionTime":"2025-10-07T12:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.470648 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.470686 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.470694 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.470708 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.470718 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:14Z","lastTransitionTime":"2025-10-07T12:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.573544 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.573586 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.573599 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.573616 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.573627 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:14Z","lastTransitionTime":"2025-10-07T12:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.676178 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.676206 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.676214 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.676226 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.676235 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:14Z","lastTransitionTime":"2025-10-07T12:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.751376 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.751403 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.751504 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:14 crc kubenswrapper[5024]: E1007 12:29:14.751665 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:29:14 crc kubenswrapper[5024]: E1007 12:29:14.751752 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:14 crc kubenswrapper[5024]: E1007 12:29:14.751860 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.778263 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.778299 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.778308 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.778323 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.778333 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:14Z","lastTransitionTime":"2025-10-07T12:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.880410 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.880454 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.880468 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.880495 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.880506 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:14Z","lastTransitionTime":"2025-10-07T12:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.983411 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.983473 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.983489 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.983513 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:14 crc kubenswrapper[5024]: I1007 12:29:14.983538 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:14Z","lastTransitionTime":"2025-10-07T12:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.087247 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.087314 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.087334 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.087385 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.087407 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:15Z","lastTransitionTime":"2025-10-07T12:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.189642 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.189716 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.189736 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.189760 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.189777 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:15Z","lastTransitionTime":"2025-10-07T12:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.292180 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.292225 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.292234 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.292248 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.292258 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:15Z","lastTransitionTime":"2025-10-07T12:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.394291 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.394340 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.394352 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.394391 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.394407 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:15Z","lastTransitionTime":"2025-10-07T12:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.496492 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.496557 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.496568 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.496585 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.496594 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:15Z","lastTransitionTime":"2025-10-07T12:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.599503 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.599545 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.599557 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.599574 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.599584 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:15Z","lastTransitionTime":"2025-10-07T12:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.702568 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.702618 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.702636 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.702658 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.702731 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:15Z","lastTransitionTime":"2025-10-07T12:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.750461 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:15 crc kubenswrapper[5024]: E1007 12:29:15.750583 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.805775 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.805814 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.805824 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.805839 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.805850 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:15Z","lastTransitionTime":"2025-10-07T12:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.907864 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.907912 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.907945 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.907961 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:15 crc kubenswrapper[5024]: I1007 12:29:15.907973 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:15Z","lastTransitionTime":"2025-10-07T12:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.010437 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.010464 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.010471 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.010484 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.010493 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:16Z","lastTransitionTime":"2025-10-07T12:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.112441 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.112474 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.112482 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.112498 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.112507 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:16Z","lastTransitionTime":"2025-10-07T12:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.215810 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.215850 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.215861 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.215876 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.215887 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:16Z","lastTransitionTime":"2025-10-07T12:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.319273 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.319338 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.319350 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.319366 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.319379 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:16Z","lastTransitionTime":"2025-10-07T12:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.421636 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.421688 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.421700 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.421720 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.421730 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:16Z","lastTransitionTime":"2025-10-07T12:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.524631 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.524693 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.524715 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.524744 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.524765 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:16Z","lastTransitionTime":"2025-10-07T12:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.627711 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.627758 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.627770 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.627788 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.627800 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:16Z","lastTransitionTime":"2025-10-07T12:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.731614 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.731684 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.731706 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.731734 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.731755 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:16Z","lastTransitionTime":"2025-10-07T12:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.751412 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.751453 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.751467 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:16 crc kubenswrapper[5024]: E1007 12:29:16.751643 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:29:16 crc kubenswrapper[5024]: E1007 12:29:16.751705 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:29:16 crc kubenswrapper[5024]: E1007 12:29:16.751776 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.834497 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.834565 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.834587 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.834615 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.834636 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:16Z","lastTransitionTime":"2025-10-07T12:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.937082 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.937317 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.937328 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.937347 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:16 crc kubenswrapper[5024]: I1007 12:29:16.937355 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:16Z","lastTransitionTime":"2025-10-07T12:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.040475 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.040547 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.040567 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.040595 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.040613 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:17Z","lastTransitionTime":"2025-10-07T12:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.143080 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.143157 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.143174 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.143198 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.143214 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:17Z","lastTransitionTime":"2025-10-07T12:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.245298 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.245326 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.245333 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.245345 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.245354 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:17Z","lastTransitionTime":"2025-10-07T12:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.348699 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.348814 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.348838 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.348866 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.348885 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:17Z","lastTransitionTime":"2025-10-07T12:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.451548 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.451656 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.451687 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.451704 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.451718 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:17Z","lastTransitionTime":"2025-10-07T12:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.553694 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.553735 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.553745 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.553761 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.553772 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:17Z","lastTransitionTime":"2025-10-07T12:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.656742 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.656793 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.656806 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.656822 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.656842 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:17Z","lastTransitionTime":"2025-10-07T12:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.750428 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:17 crc kubenswrapper[5024]: E1007 12:29:17.750563 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.759210 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.759281 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.759300 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.759326 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.759344 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:17Z","lastTransitionTime":"2025-10-07T12:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.862355 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.862433 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.862452 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.862479 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.862499 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:17Z","lastTransitionTime":"2025-10-07T12:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.965706 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.965766 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.965783 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.965806 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:17 crc kubenswrapper[5024]: I1007 12:29:17.965823 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:17Z","lastTransitionTime":"2025-10-07T12:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.068299 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.068343 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.068351 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.068367 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.068378 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:18Z","lastTransitionTime":"2025-10-07T12:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.171105 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.171184 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.171197 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.171213 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.171225 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:18Z","lastTransitionTime":"2025-10-07T12:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.274623 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.274666 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.274677 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.274695 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.274706 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:18Z","lastTransitionTime":"2025-10-07T12:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.377335 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.377381 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.377392 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.377416 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.377429 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:18Z","lastTransitionTime":"2025-10-07T12:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.480238 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.480282 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.480292 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.480308 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.480320 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:18Z","lastTransitionTime":"2025-10-07T12:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.583371 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.583416 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.583427 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.583445 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.583456 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:18Z","lastTransitionTime":"2025-10-07T12:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.685880 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.685933 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.685945 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.685963 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.685975 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:18Z","lastTransitionTime":"2025-10-07T12:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.750753 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.750823 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.751381 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:18 crc kubenswrapper[5024]: E1007 12:29:18.751515 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:29:18 crc kubenswrapper[5024]: E1007 12:29:18.751799 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:18 crc kubenswrapper[5024]: E1007 12:29:18.752291 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.752872 5024 scope.go:117] "RemoveContainer" containerID="f92c76a71b47cb0ec44854712d380966c02da46ebcd883c9b6a6168adddf2385" Oct 07 12:29:18 crc kubenswrapper[5024]: E1007 12:29:18.753128 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9b4h6_openshift-ovn-kubernetes(da5e4e6d-289a-4fc4-9672-2709c87b5258)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.787973 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.788032 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.788045 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.788061 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.788070 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:18Z","lastTransitionTime":"2025-10-07T12:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.890430 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.890476 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.890485 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.890499 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.890511 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:18Z","lastTransitionTime":"2025-10-07T12:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.992916 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.992979 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.992998 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.993026 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:18 crc kubenswrapper[5024]: I1007 12:29:18.993044 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:18Z","lastTransitionTime":"2025-10-07T12:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.095898 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.095954 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.095967 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.095996 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.096018 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:19Z","lastTransitionTime":"2025-10-07T12:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.198216 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.198292 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.198315 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.198348 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.198370 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:19Z","lastTransitionTime":"2025-10-07T12:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.301717 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.301762 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.301778 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.301804 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.301820 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:19Z","lastTransitionTime":"2025-10-07T12:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.404750 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.404783 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.404792 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.404807 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.404818 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:19Z","lastTransitionTime":"2025-10-07T12:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.506993 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.507027 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.507037 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.507053 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.507063 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:19Z","lastTransitionTime":"2025-10-07T12:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.609369 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.609415 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.609423 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.609438 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.609448 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:19Z","lastTransitionTime":"2025-10-07T12:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.711958 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.712005 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.712019 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.712034 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.712044 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:19Z","lastTransitionTime":"2025-10-07T12:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.751112 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:19 crc kubenswrapper[5024]: E1007 12:29:19.751304 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.814796 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.814834 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.814846 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.814863 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.814877 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:19Z","lastTransitionTime":"2025-10-07T12:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.917096 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.917170 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.917187 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.917212 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:19 crc kubenswrapper[5024]: I1007 12:29:19.917228 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:19Z","lastTransitionTime":"2025-10-07T12:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.022991 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.023042 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.023055 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.023074 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.023088 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:20Z","lastTransitionTime":"2025-10-07T12:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.125843 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.126111 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.126252 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.126354 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.126435 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:20Z","lastTransitionTime":"2025-10-07T12:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.228775 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.229241 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.229421 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.229586 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.229746 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:20Z","lastTransitionTime":"2025-10-07T12:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.332419 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.332495 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.332519 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.332552 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.332577 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:20Z","lastTransitionTime":"2025-10-07T12:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.435360 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.435433 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.435455 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.435486 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.435509 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:20Z","lastTransitionTime":"2025-10-07T12:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.538914 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.538966 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.538975 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.538990 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.539000 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:20Z","lastTransitionTime":"2025-10-07T12:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.598025 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.598068 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.598085 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.598106 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.598118 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-07T12:29:20Z","lastTransitionTime":"2025-10-07T12:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.647276 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-7bxvk"] Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.647665 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7bxvk" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.651011 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.651216 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.651395 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.652695 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.751360 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.751407 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:20 crc kubenswrapper[5024]: E1007 12:29:20.751456 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:29:20 crc kubenswrapper[5024]: E1007 12:29:20.751573 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.751995 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:20 crc kubenswrapper[5024]: E1007 12:29:20.752171 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.763574 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87a703ae-58c4-4520-89bb-9d16c1d604a0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7bxvk\" (UID: \"87a703ae-58c4-4520-89bb-9d16c1d604a0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7bxvk" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.763603 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87a703ae-58c4-4520-89bb-9d16c1d604a0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7bxvk\" (UID: \"87a703ae-58c4-4520-89bb-9d16c1d604a0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7bxvk" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.763623 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/87a703ae-58c4-4520-89bb-9d16c1d604a0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7bxvk\" (UID: \"87a703ae-58c4-4520-89bb-9d16c1d604a0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7bxvk" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.763640 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/87a703ae-58c4-4520-89bb-9d16c1d604a0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7bxvk\" (UID: \"87a703ae-58c4-4520-89bb-9d16c1d604a0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7bxvk" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.763700 5024 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/87a703ae-58c4-4520-89bb-9d16c1d604a0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7bxvk\" (UID: \"87a703ae-58c4-4520-89bb-9d16c1d604a0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7bxvk" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.865100 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/87a703ae-58c4-4520-89bb-9d16c1d604a0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7bxvk\" (UID: \"87a703ae-58c4-4520-89bb-9d16c1d604a0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7bxvk" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.865508 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87a703ae-58c4-4520-89bb-9d16c1d604a0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7bxvk\" (UID: \"87a703ae-58c4-4520-89bb-9d16c1d604a0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7bxvk" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.865684 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87a703ae-58c4-4520-89bb-9d16c1d604a0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7bxvk\" (UID: \"87a703ae-58c4-4520-89bb-9d16c1d604a0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7bxvk" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.865853 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/87a703ae-58c4-4520-89bb-9d16c1d604a0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7bxvk\" (UID: 
\"87a703ae-58c4-4520-89bb-9d16c1d604a0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7bxvk" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.866003 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/87a703ae-58c4-4520-89bb-9d16c1d604a0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7bxvk\" (UID: \"87a703ae-58c4-4520-89bb-9d16c1d604a0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7bxvk" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.866202 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/87a703ae-58c4-4520-89bb-9d16c1d604a0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7bxvk\" (UID: \"87a703ae-58c4-4520-89bb-9d16c1d604a0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7bxvk" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.866548 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/87a703ae-58c4-4520-89bb-9d16c1d604a0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7bxvk\" (UID: \"87a703ae-58c4-4520-89bb-9d16c1d604a0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7bxvk" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.866917 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/87a703ae-58c4-4520-89bb-9d16c1d604a0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7bxvk\" (UID: \"87a703ae-58c4-4520-89bb-9d16c1d604a0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7bxvk" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.871945 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/87a703ae-58c4-4520-89bb-9d16c1d604a0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7bxvk\" (UID: \"87a703ae-58c4-4520-89bb-9d16c1d604a0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7bxvk" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.890248 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87a703ae-58c4-4520-89bb-9d16c1d604a0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7bxvk\" (UID: \"87a703ae-58c4-4520-89bb-9d16c1d604a0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7bxvk" Oct 07 12:29:20 crc kubenswrapper[5024]: I1007 12:29:20.967555 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7bxvk" Oct 07 12:29:21 crc kubenswrapper[5024]: I1007 12:29:21.250235 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7bxvk" event={"ID":"87a703ae-58c4-4520-89bb-9d16c1d604a0","Type":"ContainerStarted","Data":"a78de4c87a701a8f145c7080831053c236143baa8c5c1af574c1250ab9e5aaca"} Oct 07 12:29:21 crc kubenswrapper[5024]: I1007 12:29:21.250602 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7bxvk" event={"ID":"87a703ae-58c4-4520-89bb-9d16c1d604a0","Type":"ContainerStarted","Data":"c93106391bcebe8c02b89371d5dd98ee679d64a1a1bd02845d1beaf66758d3dd"} Oct 07 12:29:21 crc kubenswrapper[5024]: I1007 12:29:21.265345 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7bxvk" podStartSLOduration=78.265326731 podStartE2EDuration="1m18.265326731s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:29:21.264264792 +0000 UTC m=+99.340051630" watchObservedRunningTime="2025-10-07 12:29:21.265326731 +0000 UTC m=+99.341113569" Oct 07 12:29:21 crc kubenswrapper[5024]: I1007 12:29:21.751049 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:21 crc kubenswrapper[5024]: E1007 12:29:21.751230 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:29:22 crc kubenswrapper[5024]: I1007 12:29:22.283283 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-metrics-certs\") pod \"network-metrics-daemon-gtmmn\" (UID: \"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\") " pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:22 crc kubenswrapper[5024]: E1007 12:29:22.283640 5024 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 07 12:29:22 crc kubenswrapper[5024]: E1007 12:29:22.284538 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-metrics-certs podName:ac027a0c-8461-4ea2-9a6e-40b4af6721b9 nodeName:}" failed. No retries permitted until 2025-10-07 12:30:26.284514003 +0000 UTC m=+164.360300841 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-metrics-certs") pod "network-metrics-daemon-gtmmn" (UID: "ac027a0c-8461-4ea2-9a6e-40b4af6721b9") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 07 12:29:22 crc kubenswrapper[5024]: I1007 12:29:22.751410 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:22 crc kubenswrapper[5024]: I1007 12:29:22.751464 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:22 crc kubenswrapper[5024]: I1007 12:29:22.751473 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:22 crc kubenswrapper[5024]: E1007 12:29:22.752653 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:29:22 crc kubenswrapper[5024]: E1007 12:29:22.752755 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:22 crc kubenswrapper[5024]: E1007 12:29:22.752832 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:29:23 crc kubenswrapper[5024]: I1007 12:29:23.755349 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:23 crc kubenswrapper[5024]: E1007 12:29:23.755477 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:29:24 crc kubenswrapper[5024]: I1007 12:29:24.750514 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:24 crc kubenswrapper[5024]: I1007 12:29:24.750575 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:24 crc kubenswrapper[5024]: I1007 12:29:24.750631 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:24 crc kubenswrapper[5024]: E1007 12:29:24.750736 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:29:24 crc kubenswrapper[5024]: E1007 12:29:24.750803 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:29:24 crc kubenswrapper[5024]: E1007 12:29:24.750915 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:25 crc kubenswrapper[5024]: I1007 12:29:25.751083 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:25 crc kubenswrapper[5024]: E1007 12:29:25.751232 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:29:26 crc kubenswrapper[5024]: I1007 12:29:26.750608 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:26 crc kubenswrapper[5024]: I1007 12:29:26.750765 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:26 crc kubenswrapper[5024]: E1007 12:29:26.750827 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:26 crc kubenswrapper[5024]: I1007 12:29:26.750843 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:26 crc kubenswrapper[5024]: E1007 12:29:26.751009 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:29:26 crc kubenswrapper[5024]: E1007 12:29:26.751244 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:29:27 crc kubenswrapper[5024]: I1007 12:29:27.750505 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:27 crc kubenswrapper[5024]: E1007 12:29:27.750719 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:29:28 crc kubenswrapper[5024]: I1007 12:29:28.750748 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:28 crc kubenswrapper[5024]: E1007 12:29:28.750861 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:29:28 crc kubenswrapper[5024]: I1007 12:29:28.750908 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:28 crc kubenswrapper[5024]: E1007 12:29:28.751111 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:29:28 crc kubenswrapper[5024]: I1007 12:29:28.751165 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:28 crc kubenswrapper[5024]: E1007 12:29:28.751238 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:29 crc kubenswrapper[5024]: I1007 12:29:29.750965 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:29 crc kubenswrapper[5024]: E1007 12:29:29.751087 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:29:30 crc kubenswrapper[5024]: I1007 12:29:30.750682 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:30 crc kubenswrapper[5024]: I1007 12:29:30.750761 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:30 crc kubenswrapper[5024]: I1007 12:29:30.750920 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:30 crc kubenswrapper[5024]: E1007 12:29:30.751016 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:29:30 crc kubenswrapper[5024]: E1007 12:29:30.751240 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:29:30 crc kubenswrapper[5024]: E1007 12:29:30.751300 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:30 crc kubenswrapper[5024]: I1007 12:29:30.752254 5024 scope.go:117] "RemoveContainer" containerID="f92c76a71b47cb0ec44854712d380966c02da46ebcd883c9b6a6168adddf2385" Oct 07 12:29:30 crc kubenswrapper[5024]: E1007 12:29:30.752471 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9b4h6_openshift-ovn-kubernetes(da5e4e6d-289a-4fc4-9672-2709c87b5258)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" Oct 07 12:29:31 crc kubenswrapper[5024]: I1007 12:29:31.750727 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:31 crc kubenswrapper[5024]: E1007 12:29:31.751382 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:29:32 crc kubenswrapper[5024]: I1007 12:29:32.750918 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:32 crc kubenswrapper[5024]: I1007 12:29:32.751482 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:32 crc kubenswrapper[5024]: E1007 12:29:32.751624 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:29:32 crc kubenswrapper[5024]: I1007 12:29:32.751764 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:32 crc kubenswrapper[5024]: E1007 12:29:32.752945 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:29:32 crc kubenswrapper[5024]: E1007 12:29:32.753078 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:33 crc kubenswrapper[5024]: I1007 12:29:33.750769 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:33 crc kubenswrapper[5024]: E1007 12:29:33.751077 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:29:34 crc kubenswrapper[5024]: I1007 12:29:34.750709 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:34 crc kubenswrapper[5024]: E1007 12:29:34.751301 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:29:34 crc kubenswrapper[5024]: I1007 12:29:34.750761 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:34 crc kubenswrapper[5024]: E1007 12:29:34.751547 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:34 crc kubenswrapper[5024]: I1007 12:29:34.750722 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:34 crc kubenswrapper[5024]: E1007 12:29:34.751710 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:29:35 crc kubenswrapper[5024]: I1007 12:29:35.750708 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:35 crc kubenswrapper[5024]: E1007 12:29:35.750807 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:29:36 crc kubenswrapper[5024]: I1007 12:29:36.751282 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:36 crc kubenswrapper[5024]: I1007 12:29:36.751363 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:36 crc kubenswrapper[5024]: E1007 12:29:36.751430 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:29:36 crc kubenswrapper[5024]: E1007 12:29:36.751509 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:29:36 crc kubenswrapper[5024]: I1007 12:29:36.751582 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:36 crc kubenswrapper[5024]: E1007 12:29:36.751716 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:37 crc kubenswrapper[5024]: I1007 12:29:37.750724 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:37 crc kubenswrapper[5024]: E1007 12:29:37.750924 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:29:38 crc kubenswrapper[5024]: I1007 12:29:38.300940 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rwxtd_f1ac3df5-bf16-419a-87c5-9683eebe3506/kube-multus/1.log" Oct 07 12:29:38 crc kubenswrapper[5024]: I1007 12:29:38.301511 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rwxtd_f1ac3df5-bf16-419a-87c5-9683eebe3506/kube-multus/0.log" Oct 07 12:29:38 crc kubenswrapper[5024]: I1007 12:29:38.301567 5024 generic.go:334] "Generic (PLEG): container finished" podID="f1ac3df5-bf16-419a-87c5-9683eebe3506" containerID="79a2d929eb82dacb4be37b502ebd1bb31afa797eec7f9365c4c3a05be9154fbe" exitCode=1 Oct 07 12:29:38 crc kubenswrapper[5024]: I1007 12:29:38.301597 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rwxtd" event={"ID":"f1ac3df5-bf16-419a-87c5-9683eebe3506","Type":"ContainerDied","Data":"79a2d929eb82dacb4be37b502ebd1bb31afa797eec7f9365c4c3a05be9154fbe"} Oct 07 12:29:38 crc kubenswrapper[5024]: I1007 12:29:38.301629 5024 scope.go:117] "RemoveContainer" containerID="ea9a6fd2bd205c446193cf4367b2b55f425460f5ca3daf6de51099bb57c13010" Oct 07 12:29:38 crc kubenswrapper[5024]: I1007 12:29:38.302001 5024 scope.go:117] "RemoveContainer" containerID="79a2d929eb82dacb4be37b502ebd1bb31afa797eec7f9365c4c3a05be9154fbe" Oct 07 12:29:38 crc kubenswrapper[5024]: E1007 12:29:38.302155 5024 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-rwxtd_openshift-multus(f1ac3df5-bf16-419a-87c5-9683eebe3506)\"" pod="openshift-multus/multus-rwxtd" podUID="f1ac3df5-bf16-419a-87c5-9683eebe3506" Oct 07 12:29:38 crc kubenswrapper[5024]: I1007 12:29:38.751439 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:38 crc kubenswrapper[5024]: I1007 12:29:38.751506 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:38 crc kubenswrapper[5024]: I1007 12:29:38.751719 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:38 crc kubenswrapper[5024]: E1007 12:29:38.751843 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:29:38 crc kubenswrapper[5024]: E1007 12:29:38.751883 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:38 crc kubenswrapper[5024]: E1007 12:29:38.751944 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:29:39 crc kubenswrapper[5024]: I1007 12:29:39.305899 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rwxtd_f1ac3df5-bf16-419a-87c5-9683eebe3506/kube-multus/1.log" Oct 07 12:29:39 crc kubenswrapper[5024]: I1007 12:29:39.750498 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:39 crc kubenswrapper[5024]: E1007 12:29:39.751046 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:29:40 crc kubenswrapper[5024]: I1007 12:29:40.751338 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:40 crc kubenswrapper[5024]: I1007 12:29:40.751411 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:40 crc kubenswrapper[5024]: E1007 12:29:40.751684 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:29:40 crc kubenswrapper[5024]: I1007 12:29:40.751432 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:40 crc kubenswrapper[5024]: E1007 12:29:40.751801 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:29:40 crc kubenswrapper[5024]: E1007 12:29:40.751898 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:41 crc kubenswrapper[5024]: I1007 12:29:41.750837 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:41 crc kubenswrapper[5024]: E1007 12:29:41.750956 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:29:42 crc kubenswrapper[5024]: E1007 12:29:42.705981 5024 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Oct 07 12:29:42 crc kubenswrapper[5024]: I1007 12:29:42.751259 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:42 crc kubenswrapper[5024]: I1007 12:29:42.751319 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:42 crc kubenswrapper[5024]: I1007 12:29:42.751282 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:42 crc kubenswrapper[5024]: E1007 12:29:42.752592 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:29:42 crc kubenswrapper[5024]: E1007 12:29:42.752732 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:42 crc kubenswrapper[5024]: E1007 12:29:42.752803 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:29:42 crc kubenswrapper[5024]: E1007 12:29:42.864521 5024 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 07 12:29:43 crc kubenswrapper[5024]: I1007 12:29:43.750871 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:43 crc kubenswrapper[5024]: E1007 12:29:43.751352 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:29:44 crc kubenswrapper[5024]: I1007 12:29:44.750892 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:44 crc kubenswrapper[5024]: I1007 12:29:44.750993 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:44 crc kubenswrapper[5024]: E1007 12:29:44.751034 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:29:44 crc kubenswrapper[5024]: I1007 12:29:44.750890 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:44 crc kubenswrapper[5024]: E1007 12:29:44.751121 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:44 crc kubenswrapper[5024]: E1007 12:29:44.751541 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:29:44 crc kubenswrapper[5024]: I1007 12:29:44.752430 5024 scope.go:117] "RemoveContainer" containerID="f92c76a71b47cb0ec44854712d380966c02da46ebcd883c9b6a6168adddf2385" Oct 07 12:29:45 crc kubenswrapper[5024]: I1007 12:29:45.326641 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9b4h6_da5e4e6d-289a-4fc4-9672-2709c87b5258/ovnkube-controller/3.log" Oct 07 12:29:45 crc kubenswrapper[5024]: I1007 12:29:45.329266 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" event={"ID":"da5e4e6d-289a-4fc4-9672-2709c87b5258","Type":"ContainerStarted","Data":"3b9faab32a6036b38d0ac5e68ee940cf5abcbeea2dab82a80da8a1543e26e120"} Oct 07 12:29:45 crc kubenswrapper[5024]: I1007 12:29:45.329588 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:29:45 crc kubenswrapper[5024]: I1007 12:29:45.751419 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:45 crc kubenswrapper[5024]: E1007 12:29:45.751537 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:29:46 crc kubenswrapper[5024]: I1007 12:29:46.354925 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" podStartSLOduration=103.35490571 podStartE2EDuration="1m43.35490571s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:29:45.36005297 +0000 UTC m=+123.435839808" watchObservedRunningTime="2025-10-07 12:29:46.35490571 +0000 UTC m=+124.430692548" Oct 07 12:29:46 crc kubenswrapper[5024]: I1007 12:29:46.355883 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gtmmn"] Oct 07 12:29:46 crc kubenswrapper[5024]: I1007 12:29:46.355999 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:46 crc kubenswrapper[5024]: E1007 12:29:46.356095 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:46 crc kubenswrapper[5024]: I1007 12:29:46.751443 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:46 crc kubenswrapper[5024]: I1007 12:29:46.751476 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:46 crc kubenswrapper[5024]: E1007 12:29:46.751914 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:29:46 crc kubenswrapper[5024]: E1007 12:29:46.752028 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:29:47 crc kubenswrapper[5024]: I1007 12:29:47.751118 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:47 crc kubenswrapper[5024]: I1007 12:29:47.751167 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:47 crc kubenswrapper[5024]: E1007 12:29:47.751354 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:29:47 crc kubenswrapper[5024]: E1007 12:29:47.751430 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:47 crc kubenswrapper[5024]: E1007 12:29:47.866420 5024 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 07 12:29:48 crc kubenswrapper[5024]: I1007 12:29:48.750829 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:48 crc kubenswrapper[5024]: E1007 12:29:48.750965 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:29:48 crc kubenswrapper[5024]: I1007 12:29:48.750829 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:48 crc kubenswrapper[5024]: E1007 12:29:48.751260 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:29:49 crc kubenswrapper[5024]: I1007 12:29:49.750615 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:49 crc kubenswrapper[5024]: I1007 12:29:49.750656 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:49 crc kubenswrapper[5024]: E1007 12:29:49.750737 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:29:49 crc kubenswrapper[5024]: E1007 12:29:49.750819 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:50 crc kubenswrapper[5024]: I1007 12:29:50.751196 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:50 crc kubenswrapper[5024]: I1007 12:29:50.751277 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:50 crc kubenswrapper[5024]: E1007 12:29:50.751341 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:29:50 crc kubenswrapper[5024]: E1007 12:29:50.751449 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:29:51 crc kubenswrapper[5024]: I1007 12:29:51.750686 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:51 crc kubenswrapper[5024]: I1007 12:29:51.750799 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:51 crc kubenswrapper[5024]: E1007 12:29:51.750995 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:51 crc kubenswrapper[5024]: I1007 12:29:51.751102 5024 scope.go:117] "RemoveContainer" containerID="79a2d929eb82dacb4be37b502ebd1bb31afa797eec7f9365c4c3a05be9154fbe" Oct 07 12:29:51 crc kubenswrapper[5024]: E1007 12:29:51.751527 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:29:52 crc kubenswrapper[5024]: I1007 12:29:52.349869 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rwxtd_f1ac3df5-bf16-419a-87c5-9683eebe3506/kube-multus/1.log" Oct 07 12:29:52 crc kubenswrapper[5024]: I1007 12:29:52.349932 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rwxtd" event={"ID":"f1ac3df5-bf16-419a-87c5-9683eebe3506","Type":"ContainerStarted","Data":"fde1c0d6da0160a347d332c1d9ec0498a3fb8ef5637318defbb2c5570cb46901"} Oct 07 12:29:52 crc kubenswrapper[5024]: I1007 12:29:52.750495 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:52 crc kubenswrapper[5024]: I1007 12:29:52.750621 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:52 crc kubenswrapper[5024]: E1007 12:29:52.751493 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:29:52 crc kubenswrapper[5024]: E1007 12:29:52.751539 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:29:52 crc kubenswrapper[5024]: E1007 12:29:52.867475 5024 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 07 12:29:53 crc kubenswrapper[5024]: I1007 12:29:53.751253 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:53 crc kubenswrapper[5024]: E1007 12:29:53.751447 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:29:53 crc kubenswrapper[5024]: I1007 12:29:53.751253 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:53 crc kubenswrapper[5024]: E1007 12:29:53.752000 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:54 crc kubenswrapper[5024]: I1007 12:29:54.751469 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:54 crc kubenswrapper[5024]: I1007 12:29:54.751625 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:54 crc kubenswrapper[5024]: E1007 12:29:54.751662 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:29:54 crc kubenswrapper[5024]: E1007 12:29:54.751777 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:29:55 crc kubenswrapper[5024]: I1007 12:29:55.751502 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:55 crc kubenswrapper[5024]: I1007 12:29:55.751555 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:55 crc kubenswrapper[5024]: E1007 12:29:55.751634 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:29:55 crc kubenswrapper[5024]: E1007 12:29:55.751706 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:56 crc kubenswrapper[5024]: I1007 12:29:56.751361 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:56 crc kubenswrapper[5024]: I1007 12:29:56.751425 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:56 crc kubenswrapper[5024]: E1007 12:29:56.751523 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 07 12:29:56 crc kubenswrapper[5024]: E1007 12:29:56.751648 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 07 12:29:57 crc kubenswrapper[5024]: I1007 12:29:57.750678 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:57 crc kubenswrapper[5024]: I1007 12:29:57.750728 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:57 crc kubenswrapper[5024]: E1007 12:29:57.750842 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 07 12:29:57 crc kubenswrapper[5024]: E1007 12:29:57.751084 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gtmmn" podUID="ac027a0c-8461-4ea2-9a6e-40b4af6721b9" Oct 07 12:29:58 crc kubenswrapper[5024]: I1007 12:29:58.750737 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:29:58 crc kubenswrapper[5024]: I1007 12:29:58.750841 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:29:58 crc kubenswrapper[5024]: I1007 12:29:58.753666 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Oct 07 12:29:58 crc kubenswrapper[5024]: I1007 12:29:58.754735 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Oct 07 12:29:58 crc kubenswrapper[5024]: I1007 12:29:58.755009 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Oct 07 12:29:58 crc kubenswrapper[5024]: I1007 12:29:58.757185 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Oct 07 12:29:59 crc kubenswrapper[5024]: I1007 12:29:59.751395 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:29:59 crc kubenswrapper[5024]: I1007 12:29:59.751401 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:29:59 crc kubenswrapper[5024]: I1007 12:29:59.754342 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Oct 07 12:29:59 crc kubenswrapper[5024]: I1007 12:29:59.754495 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.290599 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.327563 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-wh6d2"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.327962 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-wh6d2" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.330366 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.330624 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.330789 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-9t72d"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.331141 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9t72d" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.331437 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.332218 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.332600 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.332694 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.332750 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.333595 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.334124 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.334807 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kw42v"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.335189 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.335199 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.335237 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.335210 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.335395 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.335903 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.335965 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.335906 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.336067 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.336196 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.336230 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.336936 5024 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-apiserver/apiserver-76f77b778f-bdgps"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.337175 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.337187 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.337212 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.337429 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.338509 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s266g"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.338867 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.339064 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s266g" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.348936 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-25rxs"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.349538 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wp2gk"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.350195 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-wp2gk" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.352042 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pxvcj"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.352683 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pxvcj" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.353008 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.355331 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.358310 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-hdr5j"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.360936 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.361225 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.365892 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9rwzm"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.366486 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrdxk"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.366972 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrdxk" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.366518 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.367286 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.367451 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.366714 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.367565 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.366712 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.366753 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.367665 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.366789 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.366792 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Oct 
07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.367827 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.367887 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hdr5j" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.368184 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9rwzm" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.368727 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.366825 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.366829 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.366896 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.366897 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.366921 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.366938 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Oct 07 12:30:01 crc kubenswrapper[5024]: 
I1007 12:30:01.366952 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.366985 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.367007 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.367026 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.369247 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.370493 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.370710 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.371710 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hflv2"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.372360 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hflv2" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.373297 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.375097 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.376843 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xtwlc"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.377347 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xtwlc" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.379780 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vt8qv"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.382336 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vt8qv" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.384455 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.384743 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.384952 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.385190 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.385452 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.385553 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.385663 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.385739 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.385857 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.385935 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7fw2t"] Oct 07 12:30:01 crc kubenswrapper[5024]: 
I1007 12:30:01.386480 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7fw2t" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.386644 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.386789 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.386791 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.386905 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.386967 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.387066 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qr4f5"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.387200 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.387560 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.387626 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qr4f5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.387649 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.387733 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.387832 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.388274 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.388479 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.388556 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.388574 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.388647 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr9gr"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.389210 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr9gr" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.388665 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.388684 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.389419 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.388758 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.388888 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.389561 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.389275 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.389701 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.389760 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.389874 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Oct 07 
12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.389959 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.389988 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.390075 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.390092 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.390208 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.390304 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.390451 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.390544 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.390573 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.390936 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Oct 07 12:30:01 crc kubenswrapper[5024]: 
I1007 12:30:01.391970 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.392108 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.395428 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6t8l5"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.396203 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6t8l5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.396282 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.407953 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nv9ml"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.408635 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jrgc5"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.409074 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scskj"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.409635 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scskj" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.409959 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nv9ml" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.412625 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.412668 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd6f037f-c253-488b-9386-19aa7fab7fec-serving-cert\") pod \"apiserver-7bbb656c7d-ml5hz\" (UID: \"fd6f037f-c253-488b-9386-19aa7fab7fec\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.412690 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-audit-dir\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.412717 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fd6f037f-c253-488b-9386-19aa7fab7fec-etcd-client\") pod \"apiserver-7bbb656c7d-ml5hz\" (UID: \"fd6f037f-c253-488b-9386-19aa7fab7fec\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.412734 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.412756 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzv9g\" (UniqueName: \"kubernetes.io/projected/fd6f037f-c253-488b-9386-19aa7fab7fec-kube-api-access-tzv9g\") pod \"apiserver-7bbb656c7d-ml5hz\" (UID: \"fd6f037f-c253-488b-9386-19aa7fab7fec\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.412778 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.412800 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f631c93e-2066-410d-bfcb-232ee1cced2a-service-ca\") pod \"console-f9d7485db-9t72d\" (UID: \"f631c93e-2066-410d-bfcb-232ee1cced2a\") " pod="openshift-console/console-f9d7485db-9t72d" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.412819 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc 
kubenswrapper[5024]: I1007 12:30:01.412941 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a74a950f-a98b-45c9-bdd0-0cdda261396f-config\") pod \"route-controller-manager-6576b87f9c-8vpl5\" (UID: \"a74a950f-a98b-45c9-bdd0-0cdda261396f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.412970 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-etcd-client\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.413013 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxjjc\" (UniqueName: \"kubernetes.io/projected/f631c93e-2066-410d-bfcb-232ee1cced2a-kube-api-access-bxjjc\") pod \"console-f9d7485db-9t72d\" (UID: \"f631c93e-2066-410d-bfcb-232ee1cced2a\") " pod="openshift-console/console-f9d7485db-9t72d" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.413034 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fd6f037f-c253-488b-9386-19aa7fab7fec-audit-dir\") pod \"apiserver-7bbb656c7d-ml5hz\" (UID: \"fd6f037f-c253-488b-9386-19aa7fab7fec\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.413230 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd6f037f-c253-488b-9386-19aa7fab7fec-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ml5hz\" (UID: 
\"fd6f037f-c253-488b-9386-19aa7fab7fec\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.413318 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.413381 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f631c93e-2066-410d-bfcb-232ee1cced2a-console-oauth-config\") pod \"console-f9d7485db-9t72d\" (UID: \"f631c93e-2066-410d-bfcb-232ee1cced2a\") " pod="openshift-console/console-f9d7485db-9t72d" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.413411 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrlgr\" (UniqueName: \"kubernetes.io/projected/f2327164-7135-4dd2-bc42-dd68cefdb772-kube-api-access-rrlgr\") pod \"openshift-apiserver-operator-796bbdcf4f-s266g\" (UID: \"f2327164-7135-4dd2-bc42-dd68cefdb772\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s266g" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.413444 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx96j\" (UniqueName: \"kubernetes.io/projected/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-kube-api-access-dx96j\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.413503 5024 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.413513 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-config\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.413573 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.413619 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-etcd-serving-ca\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.413678 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.413716 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.413755 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a74a950f-a98b-45c9-bdd0-0cdda261396f-serving-cert\") pod \"route-controller-manager-6576b87f9c-8vpl5\" (UID: \"a74a950f-a98b-45c9-bdd0-0cdda261396f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.413781 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2327164-7135-4dd2-bc42-dd68cefdb772-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-s266g\" (UID: \"f2327164-7135-4dd2-bc42-dd68cefdb772\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s266g" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.413806 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a74a950f-a98b-45c9-bdd0-0cdda261396f-client-ca\") pod \"route-controller-manager-6576b87f9c-8vpl5\" (UID: \"a74a950f-a98b-45c9-bdd0-0cdda261396f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.413849 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.413875 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-audit\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.413900 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.413927 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f631c93e-2066-410d-bfcb-232ee1cced2a-oauth-serving-cert\") pod \"console-f9d7485db-9t72d\" (UID: \"f631c93e-2066-410d-bfcb-232ee1cced2a\") " pod="openshift-console/console-f9d7485db-9t72d" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.413947 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-encryption-config\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.413974 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-node-pullsecrets\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.413999 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-serving-cert\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.414031 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg248\" (UniqueName: \"kubernetes.io/projected/06411252-fabf-416c-8b3f-3cb830b235f4-kube-api-access-kg248\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.414066 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f631c93e-2066-410d-bfcb-232ee1cced2a-console-serving-cert\") pod \"console-f9d7485db-9t72d\" (UID: \"f631c93e-2066-410d-bfcb-232ee1cced2a\") " pod="openshift-console/console-f9d7485db-9t72d" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.414087 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f631c93e-2066-410d-bfcb-232ee1cced2a-console-config\") pod \"console-f9d7485db-9t72d\" (UID: \"f631c93e-2066-410d-bfcb-232ee1cced2a\") " pod="openshift-console/console-f9d7485db-9t72d" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.414113 5024 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5qpn\" (UniqueName: \"kubernetes.io/projected/a74a950f-a98b-45c9-bdd0-0cdda261396f-kube-api-access-c5qpn\") pod \"route-controller-manager-6576b87f9c-8vpl5\" (UID: \"a74a950f-a98b-45c9-bdd0-0cdda261396f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.414238 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.414268 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f631c93e-2066-410d-bfcb-232ee1cced2a-trusted-ca-bundle\") pod \"console-f9d7485db-9t72d\" (UID: \"f631c93e-2066-410d-bfcb-232ee1cced2a\") " pod="openshift-console/console-f9d7485db-9t72d" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.414286 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fd6f037f-c253-488b-9386-19aa7fab7fec-encryption-config\") pod \"apiserver-7bbb656c7d-ml5hz\" (UID: \"fd6f037f-c253-488b-9386-19aa7fab7fec\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.414313 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/06411252-fabf-416c-8b3f-3cb830b235f4-audit-policies\") pod 
\"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.414367 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2327164-7135-4dd2-bc42-dd68cefdb772-config\") pod \"openshift-apiserver-operator-796bbdcf4f-s266g\" (UID: \"f2327164-7135-4dd2-bc42-dd68cefdb772\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s266g" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.414402 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-image-import-ca\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.414434 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.414479 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwnt6\" (UniqueName: \"kubernetes.io/projected/563d2566-d201-4094-9f4d-20a167bfd0f7-kube-api-access-nwnt6\") pod \"downloads-7954f5f757-wh6d2\" (UID: \"563d2566-d201-4094-9f4d-20a167bfd0f7\") " pod="openshift-console/downloads-7954f5f757-wh6d2" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.414516 5024 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fd6f037f-c253-488b-9386-19aa7fab7fec-audit-policies\") pod \"apiserver-7bbb656c7d-ml5hz\" (UID: \"fd6f037f-c253-488b-9386-19aa7fab7fec\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.414616 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fd6f037f-c253-488b-9386-19aa7fab7fec-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ml5hz\" (UID: \"fd6f037f-c253-488b-9386-19aa7fab7fec\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.414649 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06411252-fabf-416c-8b3f-3cb830b235f4-audit-dir\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.417630 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fdw8z"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.418323 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fdw8z" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.419359 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.419574 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jrgc5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.449482 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.450357 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.456524 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.458942 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-p9sts"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.459603 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-l4pwr"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.460111 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-l4pwr" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.460510 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p9sts" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.464194 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.465156 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29330670-5868z"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.465780 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29330670-5868z" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.469050 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xs54z"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.469660 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zh82b"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.470097 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-zh82b" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.470459 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.470768 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.475983 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.476947 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.480229 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nhfn7"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.482717 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2p5j7"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.483197 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nhfn7" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.484054 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dvg5m"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.484099 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.484488 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-frvxs"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.484655 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2p5j7" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.484817 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dvg5m" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.485460 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-frvxs" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.489925 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fqbxr"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.490688 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fqbxr" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.490958 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x57mw"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.491872 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x57mw" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.492375 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-fxzgs"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.492914 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fxzgs" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.492976 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.494696 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-whw6l"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.495238 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-whw6l" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.496449 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzfnc"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.497267 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzfnc" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.497522 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.498923 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9t72d"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.500227 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-wh6d2"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.500938 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-m97mg"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.501901 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-m97mg" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.502273 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wp2gk"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.503060 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.504105 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kw42v"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.506347 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bdgps"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.508249 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7fw2t"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 
12:30:01.508934 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vt8qv"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.509868 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.515361 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xtwlc"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.516313 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fd6f037f-c253-488b-9386-19aa7fab7fec-audit-policies\") pod \"apiserver-7bbb656c7d-ml5hz\" (UID: \"fd6f037f-c253-488b-9386-19aa7fab7fec\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.516348 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fd6f037f-c253-488b-9386-19aa7fab7fec-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ml5hz\" (UID: \"fd6f037f-c253-488b-9386-19aa7fab7fec\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.516409 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06411252-fabf-416c-8b3f-3cb830b235f4-audit-dir\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.516449 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/d1d4add5-c462-4ccc-8c65-8efd72b99637-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-scskj\" (UID: \"d1d4add5-c462-4ccc-8c65-8efd72b99637\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scskj" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.516480 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/de462421-9ecd-4bd7-9b00-f054da067ca6-etcd-service-ca\") pod \"etcd-operator-b45778765-nv9ml\" (UID: \"de462421-9ecd-4bd7-9b00-f054da067ca6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv9ml" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.516505 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8qp9\" (UniqueName: \"kubernetes.io/projected/17854bb6-7bec-4972-92a8-299702642b45-kube-api-access-s8qp9\") pod \"ingress-operator-5b745b69d9-vt8qv\" (UID: \"17854bb6-7bec-4972-92a8-299702642b45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vt8qv" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.516532 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c3167bb-dbe2-42bf-8693-cf28b7a9a28c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-fdw8z\" (UID: \"3c3167bb-dbe2-42bf-8693-cf28b7a9a28c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fdw8z" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.516558 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf226d94-0733-4205-9790-7590b441dac9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pxvcj\" (UID: \"bf226d94-0733-4205-9790-7590b441dac9\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pxvcj" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.516587 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvvjd\" (UniqueName: \"kubernetes.io/projected/6bb2be95-3593-4045-8dca-353189946a2f-kube-api-access-cvvjd\") pod \"openshift-config-operator-7777fb866f-hflv2\" (UID: \"6bb2be95-3593-4045-8dca-353189946a2f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hflv2" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.516635 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.516667 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xnrc\" (UniqueName: \"kubernetes.io/projected/306952da-e05e-468a-8e44-5cc64940f7f6-kube-api-access-5xnrc\") pod \"packageserver-d55dfcdfc-gr9gr\" (UID: \"306952da-e05e-468a-8e44-5cc64940f7f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr9gr" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.516700 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd6f037f-c253-488b-9386-19aa7fab7fec-serving-cert\") pod \"apiserver-7bbb656c7d-ml5hz\" (UID: \"fd6f037f-c253-488b-9386-19aa7fab7fec\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.516728 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-audit-dir\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.516778 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fd6f037f-c253-488b-9386-19aa7fab7fec-etcd-client\") pod \"apiserver-7bbb656c7d-ml5hz\" (UID: \"fd6f037f-c253-488b-9386-19aa7fab7fec\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.516808 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.516839 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-275kw\" (UniqueName: \"kubernetes.io/projected/c6f06beb-72aa-499a-a760-d36404bca577-kube-api-access-275kw\") pod \"console-operator-58897d9998-xtwlc\" (UID: \"c6f06beb-72aa-499a-a760-d36404bca577\") " pod="openshift-console-operator/console-operator-58897d9998-xtwlc" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.516869 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6047f06e-4b55-4a39-be9c-6341c8cf7082-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wp2gk\" (UID: \"6047f06e-4b55-4a39-be9c-6341c8cf7082\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wp2gk" Oct 07 12:30:01 
crc kubenswrapper[5024]: I1007 12:30:01.516895 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzv9g\" (UniqueName: \"kubernetes.io/projected/fd6f037f-c253-488b-9386-19aa7fab7fec-kube-api-access-tzv9g\") pod \"apiserver-7bbb656c7d-ml5hz\" (UID: \"fd6f037f-c253-488b-9386-19aa7fab7fec\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.516926 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.516955 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c3167bb-dbe2-42bf-8693-cf28b7a9a28c-config\") pod \"kube-apiserver-operator-766d6c64bb-fdw8z\" (UID: \"3c3167bb-dbe2-42bf-8693-cf28b7a9a28c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fdw8z" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.516983 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f631c93e-2066-410d-bfcb-232ee1cced2a-service-ca\") pod \"console-f9d7485db-9t72d\" (UID: \"f631c93e-2066-410d-bfcb-232ee1cced2a\") " pod="openshift-console/console-f9d7485db-9t72d" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517008 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6f06beb-72aa-499a-a760-d36404bca577-config\") pod \"console-operator-58897d9998-xtwlc\" (UID: \"c6f06beb-72aa-499a-a760-d36404bca577\") " 
pod="openshift-console-operator/console-operator-58897d9998-xtwlc" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517035 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1c479290-3870-4f83-b3e6-a86e91bda22e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6t8l5\" (UID: \"1c479290-3870-4f83-b3e6-a86e91bda22e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6t8l5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517061 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7300a892-62e1-4ec9-b0c0-83d0aaf90bd3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qr4f5\" (UID: \"7300a892-62e1-4ec9-b0c0-83d0aaf90bd3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qr4f5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517094 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517115 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6f06beb-72aa-499a-a760-d36404bca577-trusted-ca\") pod \"console-operator-58897d9998-xtwlc\" (UID: \"c6f06beb-72aa-499a-a760-d36404bca577\") " pod="openshift-console-operator/console-operator-58897d9998-xtwlc" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517161 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c3167bb-dbe2-42bf-8693-cf28b7a9a28c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-fdw8z\" (UID: \"3c3167bb-dbe2-42bf-8693-cf28b7a9a28c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fdw8z" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517196 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a74a950f-a98b-45c9-bdd0-0cdda261396f-config\") pod \"route-controller-manager-6576b87f9c-8vpl5\" (UID: \"a74a950f-a98b-45c9-bdd0-0cdda261396f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517224 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-etcd-client\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517251 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f324f3c7-44fa-473c-8b60-ea30be3b7045-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-25rxs\" (UID: \"f324f3c7-44fa-473c-8b60-ea30be3b7045\") " pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517292 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxjjc\" (UniqueName: \"kubernetes.io/projected/f631c93e-2066-410d-bfcb-232ee1cced2a-kube-api-access-bxjjc\") pod \"console-f9d7485db-9t72d\" (UID: \"f631c93e-2066-410d-bfcb-232ee1cced2a\") " 
pod="openshift-console/console-f9d7485db-9t72d" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517322 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fd6f037f-c253-488b-9386-19aa7fab7fec-audit-dir\") pod \"apiserver-7bbb656c7d-ml5hz\" (UID: \"fd6f037f-c253-488b-9386-19aa7fab7fec\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517347 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd6f037f-c253-488b-9386-19aa7fab7fec-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ml5hz\" (UID: \"fd6f037f-c253-488b-9386-19aa7fab7fec\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517376 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517404 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de462421-9ecd-4bd7-9b00-f054da067ca6-config\") pod \"etcd-operator-b45778765-nv9ml\" (UID: \"de462421-9ecd-4bd7-9b00-f054da067ca6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv9ml" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517432 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/0abe2fa8-3512-46b2-a738-682a833ae488-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7fw2t\" (UID: \"0abe2fa8-3512-46b2-a738-682a833ae488\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7fw2t" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517459 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/306952da-e05e-468a-8e44-5cc64940f7f6-tmpfs\") pod \"packageserver-d55dfcdfc-gr9gr\" (UID: \"306952da-e05e-468a-8e44-5cc64940f7f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr9gr" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517483 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k52kd\" (UniqueName: \"kubernetes.io/projected/1c479290-3870-4f83-b3e6-a86e91bda22e-kube-api-access-k52kd\") pod \"package-server-manager-789f6589d5-6t8l5\" (UID: \"1c479290-3870-4f83-b3e6-a86e91bda22e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6t8l5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517529 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f631c93e-2066-410d-bfcb-232ee1cced2a-console-oauth-config\") pod \"console-f9d7485db-9t72d\" (UID: \"f631c93e-2066-410d-bfcb-232ee1cced2a\") " pod="openshift-console/console-f9d7485db-9t72d" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517554 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrlgr\" (UniqueName: \"kubernetes.io/projected/f2327164-7135-4dd2-bc42-dd68cefdb772-kube-api-access-rrlgr\") pod \"openshift-apiserver-operator-796bbdcf4f-s266g\" (UID: \"f2327164-7135-4dd2-bc42-dd68cefdb772\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s266g" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517581 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx96j\" (UniqueName: \"kubernetes.io/projected/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-kube-api-access-dx96j\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517642 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6f06beb-72aa-499a-a760-d36404bca577-serving-cert\") pod \"console-operator-58897d9998-xtwlc\" (UID: \"c6f06beb-72aa-499a-a760-d36404bca577\") " pod="openshift-console-operator/console-operator-58897d9998-xtwlc" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517670 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f324f3c7-44fa-473c-8b60-ea30be3b7045-serving-cert\") pod \"controller-manager-879f6c89f-25rxs\" (UID: \"f324f3c7-44fa-473c-8b60-ea30be3b7045\") " pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517699 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grv86\" (UniqueName: \"kubernetes.io/projected/a9bbb41d-c515-43d3-8e35-a73bed39e840-kube-api-access-grv86\") pod \"olm-operator-6b444d44fb-jrgc5\" (UID: \"a9bbb41d-c515-43d3-8e35-a73bed39e840\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jrgc5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517724 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cfecdd47-3bdf-4f99-b34b-dbe793b59717-config\") pod \"authentication-operator-69f744f599-9rwzm\" (UID: \"cfecdd47-3bdf-4f99-b34b-dbe793b59717\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9rwzm" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517754 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-config\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517783 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517810 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-etcd-serving-ca\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517834 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517866 5024 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d1d4add5-c462-4ccc-8c65-8efd72b99637-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-scskj\" (UID: \"d1d4add5-c462-4ccc-8c65-8efd72b99637\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scskj" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517894 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de462421-9ecd-4bd7-9b00-f054da067ca6-serving-cert\") pod \"etcd-operator-b45778765-nv9ml\" (UID: \"de462421-9ecd-4bd7-9b00-f054da067ca6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv9ml" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517922 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pdf4\" (UniqueName: \"kubernetes.io/projected/de462421-9ecd-4bd7-9b00-f054da067ca6-kube-api-access-4pdf4\") pod \"etcd-operator-b45778765-nv9ml\" (UID: \"de462421-9ecd-4bd7-9b00-f054da067ca6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv9ml" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517949 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f324f3c7-44fa-473c-8b60-ea30be3b7045-config\") pod \"controller-manager-879f6c89f-25rxs\" (UID: \"f324f3c7-44fa-473c-8b60-ea30be3b7045\") " pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.517974 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24sdj\" (UniqueName: \"kubernetes.io/projected/d1d4add5-c462-4ccc-8c65-8efd72b99637-kube-api-access-24sdj\") pod 
\"cluster-image-registry-operator-dc59b4c8b-scskj\" (UID: \"d1d4add5-c462-4ccc-8c65-8efd72b99637\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scskj" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518003 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmn6d\" (UniqueName: \"kubernetes.io/projected/6047f06e-4b55-4a39-be9c-6341c8cf7082-kube-api-access-jmn6d\") pod \"machine-api-operator-5694c8668f-wp2gk\" (UID: \"6047f06e-4b55-4a39-be9c-6341c8cf7082\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wp2gk" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518029 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7300a892-62e1-4ec9-b0c0-83d0aaf90bd3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qr4f5\" (UID: \"7300a892-62e1-4ec9-b0c0-83d0aaf90bd3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qr4f5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518058 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518082 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc6b7685-5713-4da2-a9bf-bb61144e3561-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jrdxk\" (UID: \"bc6b7685-5713-4da2-a9bf-bb61144e3561\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrdxk" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518109 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1d4add5-c462-4ccc-8c65-8efd72b99637-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-scskj\" (UID: \"d1d4add5-c462-4ccc-8c65-8efd72b99637\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scskj" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518152 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx6g2\" (UniqueName: \"kubernetes.io/projected/cfecdd47-3bdf-4f99-b34b-dbe793b59717-kube-api-access-sx6g2\") pod \"authentication-operator-69f744f599-9rwzm\" (UID: \"cfecdd47-3bdf-4f99-b34b-dbe793b59717\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9rwzm" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518184 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbv42\" (UniqueName: \"kubernetes.io/projected/f324f3c7-44fa-473c-8b60-ea30be3b7045-kube-api-access-sbv42\") pod \"controller-manager-879f6c89f-25rxs\" (UID: \"f324f3c7-44fa-473c-8b60-ea30be3b7045\") " pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518207 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a9bbb41d-c515-43d3-8e35-a73bed39e840-srv-cert\") pod \"olm-operator-6b444d44fb-jrgc5\" (UID: \"a9bbb41d-c515-43d3-8e35-a73bed39e840\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jrgc5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518240 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a74a950f-a98b-45c9-bdd0-0cdda261396f-serving-cert\") pod \"route-controller-manager-6576b87f9c-8vpl5\" (UID: \"a74a950f-a98b-45c9-bdd0-0cdda261396f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518269 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2327164-7135-4dd2-bc42-dd68cefdb772-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-s266g\" (UID: \"f2327164-7135-4dd2-bc42-dd68cefdb772\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s266g" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518298 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a74a950f-a98b-45c9-bdd0-0cdda261396f-client-ca\") pod \"route-controller-manager-6576b87f9c-8vpl5\" (UID: \"a74a950f-a98b-45c9-bdd0-0cdda261396f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518326 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/306952da-e05e-468a-8e44-5cc64940f7f6-webhook-cert\") pod \"packageserver-d55dfcdfc-gr9gr\" (UID: \"306952da-e05e-468a-8e44-5cc64940f7f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr9gr" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518372 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: 
\"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518403 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-audit\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518428 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf8wx\" (UniqueName: \"kubernetes.io/projected/bc6b7685-5713-4da2-a9bf-bb61144e3561-kube-api-access-wf8wx\") pod \"openshift-controller-manager-operator-756b6f6bc6-jrdxk\" (UID: \"bc6b7685-5713-4da2-a9bf-bb61144e3561\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrdxk" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518456 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmh6n\" (UniqueName: \"kubernetes.io/projected/24a251d4-749c-4d2d-9fb5-8bd2330d7b35-kube-api-access-vmh6n\") pod \"machine-approver-56656f9798-hdr5j\" (UID: \"24a251d4-749c-4d2d-9fb5-8bd2330d7b35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hdr5j" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518487 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518516 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24a251d4-749c-4d2d-9fb5-8bd2330d7b35-config\") pod \"machine-approver-56656f9798-hdr5j\" (UID: \"24a251d4-749c-4d2d-9fb5-8bd2330d7b35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hdr5j" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518547 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f631c93e-2066-410d-bfcb-232ee1cced2a-oauth-serving-cert\") pod \"console-f9d7485db-9t72d\" (UID: \"f631c93e-2066-410d-bfcb-232ee1cced2a\") " pod="openshift-console/console-f9d7485db-9t72d" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518569 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-encryption-config\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518598 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a9bbb41d-c515-43d3-8e35-a73bed39e840-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jrgc5\" (UID: \"a9bbb41d-c515-43d3-8e35-a73bed39e840\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jrgc5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518625 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24a251d4-749c-4d2d-9fb5-8bd2330d7b35-auth-proxy-config\") pod \"machine-approver-56656f9798-hdr5j\" (UID: \"24a251d4-749c-4d2d-9fb5-8bd2330d7b35\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hdr5j" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518654 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-node-pullsecrets\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518679 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6047f06e-4b55-4a39-be9c-6341c8cf7082-config\") pod \"machine-api-operator-5694c8668f-wp2gk\" (UID: \"6047f06e-4b55-4a39-be9c-6341c8cf7082\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wp2gk" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518711 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-serving-cert\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518746 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfecdd47-3bdf-4f99-b34b-dbe793b59717-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9rwzm\" (UID: \"cfecdd47-3bdf-4f99-b34b-dbe793b59717\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9rwzm" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518776 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/17854bb6-7bec-4972-92a8-299702642b45-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vt8qv\" (UID: \"17854bb6-7bec-4972-92a8-299702642b45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vt8qv" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518798 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/24a251d4-749c-4d2d-9fb5-8bd2330d7b35-machine-approver-tls\") pod \"machine-approver-56656f9798-hdr5j\" (UID: \"24a251d4-749c-4d2d-9fb5-8bd2330d7b35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hdr5j" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518825 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5kj6\" (UniqueName: \"kubernetes.io/projected/0abe2fa8-3512-46b2-a738-682a833ae488-kube-api-access-r5kj6\") pod \"control-plane-machine-set-operator-78cbb6b69f-7fw2t\" (UID: \"0abe2fa8-3512-46b2-a738-682a833ae488\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7fw2t" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518858 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg248\" (UniqueName: \"kubernetes.io/projected/06411252-fabf-416c-8b3f-3cb830b235f4-kube-api-access-kg248\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518887 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfecdd47-3bdf-4f99-b34b-dbe793b59717-serving-cert\") pod \"authentication-operator-69f744f599-9rwzm\" (UID: \"cfecdd47-3bdf-4f99-b34b-dbe793b59717\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-9rwzm" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518916 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f631c93e-2066-410d-bfcb-232ee1cced2a-console-serving-cert\") pod \"console-f9d7485db-9t72d\" (UID: \"f631c93e-2066-410d-bfcb-232ee1cced2a\") " pod="openshift-console/console-f9d7485db-9t72d" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518941 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f631c93e-2066-410d-bfcb-232ee1cced2a-console-config\") pod \"console-f9d7485db-9t72d\" (UID: \"f631c93e-2066-410d-bfcb-232ee1cced2a\") " pod="openshift-console/console-f9d7485db-9t72d" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.518971 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5qpn\" (UniqueName: \"kubernetes.io/projected/a74a950f-a98b-45c9-bdd0-0cdda261396f-kube-api-access-c5qpn\") pod \"route-controller-manager-6576b87f9c-8vpl5\" (UID: \"a74a950f-a98b-45c9-bdd0-0cdda261396f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.519002 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.519030 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7300a892-62e1-4ec9-b0c0-83d0aaf90bd3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qr4f5\" (UID: \"7300a892-62e1-4ec9-b0c0-83d0aaf90bd3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qr4f5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.519053 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f324f3c7-44fa-473c-8b60-ea30be3b7045-client-ca\") pod \"controller-manager-879f6c89f-25rxs\" (UID: \"f324f3c7-44fa-473c-8b60-ea30be3b7045\") " pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.519081 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/17854bb6-7bec-4972-92a8-299702642b45-metrics-tls\") pod \"ingress-operator-5b745b69d9-vt8qv\" (UID: \"17854bb6-7bec-4972-92a8-299702642b45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vt8qv" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.519108 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bb2be95-3593-4045-8dca-353189946a2f-serving-cert\") pod \"openshift-config-operator-7777fb866f-hflv2\" (UID: \"6bb2be95-3593-4045-8dca-353189946a2f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hflv2" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.519141 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f631c93e-2066-410d-bfcb-232ee1cced2a-trusted-ca-bundle\") pod \"console-f9d7485db-9t72d\" (UID: \"f631c93e-2066-410d-bfcb-232ee1cced2a\") " pod="openshift-console/console-f9d7485db-9t72d" Oct 07 12:30:01 crc 
kubenswrapper[5024]: I1007 12:30:01.521523 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fd6f037f-c253-488b-9386-19aa7fab7fec-encryption-config\") pod \"apiserver-7bbb656c7d-ml5hz\" (UID: \"fd6f037f-c253-488b-9386-19aa7fab7fec\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.522369 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-etcd-serving-ca\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.522787 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-node-pullsecrets\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.524326 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fd6f037f-c253-488b-9386-19aa7fab7fec-audit-dir\") pod \"apiserver-7bbb656c7d-ml5hz\" (UID: \"fd6f037f-c253-488b-9386-19aa7fab7fec\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.524515 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-audit-dir\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.525476 5024 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fd6f037f-c253-488b-9386-19aa7fab7fec-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ml5hz\" (UID: \"fd6f037f-c253-488b-9386-19aa7fab7fec\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.525611 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd6f037f-c253-488b-9386-19aa7fab7fec-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ml5hz\" (UID: \"fd6f037f-c253-488b-9386-19aa7fab7fec\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.526030 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-audit\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.526406 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f631c93e-2066-410d-bfcb-232ee1cced2a-console-config\") pod \"console-f9d7485db-9t72d\" (UID: \"f631c93e-2066-410d-bfcb-232ee1cced2a\") " pod="openshift-console/console-f9d7485db-9t72d" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.526763 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06411252-fabf-416c-8b3f-3cb830b235f4-audit-dir\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.527248 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.527505 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-config\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.527509 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f631c93e-2066-410d-bfcb-232ee1cced2a-oauth-serving-cert\") pod \"console-f9d7485db-9t72d\" (UID: \"f631c93e-2066-410d-bfcb-232ee1cced2a\") " pod="openshift-console/console-f9d7485db-9t72d" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.527845 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fd6f037f-c253-488b-9386-19aa7fab7fec-audit-policies\") pod \"apiserver-7bbb656c7d-ml5hz\" (UID: \"fd6f037f-c253-488b-9386-19aa7fab7fec\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.528348 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a74a950f-a98b-45c9-bdd0-0cdda261396f-config\") pod \"route-controller-manager-6576b87f9c-8vpl5\" (UID: \"a74a950f-a98b-45c9-bdd0-0cdda261396f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.527459 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.528616 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f631c93e-2066-410d-bfcb-232ee1cced2a-trusted-ca-bundle\") pod \"console-f9d7485db-9t72d\" (UID: \"f631c93e-2066-410d-bfcb-232ee1cced2a\") " pod="openshift-console/console-f9d7485db-9t72d" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.529528 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9rwzm"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.530114 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f631c93e-2066-410d-bfcb-232ee1cced2a-service-ca\") pod \"console-f9d7485db-9t72d\" (UID: \"f631c93e-2066-410d-bfcb-232ee1cced2a\") " pod="openshift-console/console-f9d7485db-9t72d" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.530649 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/06411252-fabf-416c-8b3f-3cb830b235f4-audit-policies\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.530682 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/de462421-9ecd-4bd7-9b00-f054da067ca6-etcd-ca\") pod \"etcd-operator-b45778765-nv9ml\" (UID: 
\"de462421-9ecd-4bd7-9b00-f054da067ca6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv9ml" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.530707 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfecdd47-3bdf-4f99-b34b-dbe793b59717-service-ca-bundle\") pod \"authentication-operator-69f744f599-9rwzm\" (UID: \"cfecdd47-3bdf-4f99-b34b-dbe793b59717\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9rwzm" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.531268 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/06411252-fabf-416c-8b3f-3cb830b235f4-audit-policies\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.531373 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2327164-7135-4dd2-bc42-dd68cefdb772-config\") pod \"openshift-apiserver-operator-796bbdcf4f-s266g\" (UID: \"f2327164-7135-4dd2-bc42-dd68cefdb772\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s266g" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.531401 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-image-import-ca\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.531434 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/bc6b7685-5713-4da2-a9bf-bb61144e3561-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jrdxk\" (UID: \"bc6b7685-5713-4da2-a9bf-bb61144e3561\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrdxk" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.531496 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17854bb6-7bec-4972-92a8-299702642b45-trusted-ca\") pod \"ingress-operator-5b745b69d9-vt8qv\" (UID: \"17854bb6-7bec-4972-92a8-299702642b45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vt8qv" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.531729 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.531783 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.531787 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6047f06e-4b55-4a39-be9c-6341c8cf7082-images\") pod \"machine-api-operator-5694c8668f-wp2gk\" (UID: \"6047f06e-4b55-4a39-be9c-6341c8cf7082\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-wp2gk" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.531858 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/de462421-9ecd-4bd7-9b00-f054da067ca6-etcd-client\") pod \"etcd-operator-b45778765-nv9ml\" (UID: \"de462421-9ecd-4bd7-9b00-f054da067ca6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv9ml" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.532168 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.532187 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2327164-7135-4dd2-bc42-dd68cefdb772-config\") pod \"openshift-apiserver-operator-796bbdcf4f-s266g\" (UID: \"f2327164-7135-4dd2-bc42-dd68cefdb772\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s266g" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.532527 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.532649 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-image-import-ca\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.532720 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.532748 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-serving-cert\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.532586 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/306952da-e05e-468a-8e44-5cc64940f7f6-apiservice-cert\") pod \"packageserver-d55dfcdfc-gr9gr\" (UID: \"306952da-e05e-468a-8e44-5cc64940f7f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr9gr" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.532824 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f596\" (UniqueName: \"kubernetes.io/projected/bf226d94-0733-4205-9790-7590b441dac9-kube-api-access-5f596\") pod \"cluster-samples-operator-665b6dd947-pxvcj\" (UID: \"bf226d94-0733-4205-9790-7590b441dac9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pxvcj" Oct 07 12:30:01 crc 
kubenswrapper[5024]: I1007 12:30:01.532855 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6bb2be95-3593-4045-8dca-353189946a2f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hflv2\" (UID: \"6bb2be95-3593-4045-8dca-353189946a2f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hflv2" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.532928 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwnt6\" (UniqueName: \"kubernetes.io/projected/563d2566-d201-4094-9f4d-20a167bfd0f7-kube-api-access-nwnt6\") pod \"downloads-7954f5f757-wh6d2\" (UID: \"563d2566-d201-4094-9f4d-20a167bfd0f7\") " pod="openshift-console/downloads-7954f5f757-wh6d2" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.532944 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.533084 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-encryption-config\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.533424 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f631c93e-2066-410d-bfcb-232ee1cced2a-console-oauth-config\") pod \"console-f9d7485db-9t72d\" (UID: \"f631c93e-2066-410d-bfcb-232ee1cced2a\") " pod="openshift-console/console-f9d7485db-9t72d" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.533431 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.533562 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.534069 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.534197 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.534426 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrdxk"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.534964 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/a74a950f-a98b-45c9-bdd0-0cdda261396f-client-ca\") pod \"route-controller-manager-6576b87f9c-8vpl5\" (UID: \"a74a950f-a98b-45c9-bdd0-0cdda261396f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.535565 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.537384 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-etcd-client\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.537384 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2327164-7135-4dd2-bc42-dd68cefdb772-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-s266g\" (UID: \"f2327164-7135-4dd2-bc42-dd68cefdb772\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s266g" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.537446 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a74a950f-a98b-45c9-bdd0-0cdda261396f-serving-cert\") pod \"route-controller-manager-6576b87f9c-8vpl5\" (UID: \"a74a950f-a98b-45c9-bdd0-0cdda261396f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.537814 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nhfn7"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.538552 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd6f037f-c253-488b-9386-19aa7fab7fec-serving-cert\") pod \"apiserver-7bbb656c7d-ml5hz\" (UID: \"fd6f037f-c253-488b-9386-19aa7fab7fec\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.538855 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fd6f037f-c253-488b-9386-19aa7fab7fec-encryption-config\") pod \"apiserver-7bbb656c7d-ml5hz\" (UID: \"fd6f037f-c253-488b-9386-19aa7fab7fec\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.539056 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-fxzgs"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.540214 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zh82b"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.544477 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fqbxr"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.545608 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pxvcj"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.548582 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: 
\"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.549966 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-25rxs"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.550002 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nv9ml"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.550112 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.551243 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-p9sts"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.551963 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qr4f5"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.553032 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scskj"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.555219 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s266g"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.555290 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6t8l5"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.555717 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f631c93e-2066-410d-bfcb-232ee1cced2a-console-serving-cert\") pod \"console-f9d7485db-9t72d\" (UID: 
\"f631c93e-2066-410d-bfcb-232ee1cced2a\") " pod="openshift-console/console-f9d7485db-9t72d" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.556226 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr9gr"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.559183 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-frvxs"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.559288 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hflv2"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.559360 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-whw6l"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.562598 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fd6f037f-c253-488b-9386-19aa7fab7fec-etcd-client\") pod \"apiserver-7bbb656c7d-ml5hz\" (UID: \"fd6f037f-c253-488b-9386-19aa7fab7fec\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.562657 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2p5j7"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.562726 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29330670-5868z"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.564170 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xs54z"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.565264 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fdw8z"] Oct 
07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.570076 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-wskr8"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.570608 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.571379 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9qlzn"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.572757 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wskr8" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.574075 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jrgc5"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.574099 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x57mw"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.574180 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9qlzn" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.583577 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dvg5m"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.585222 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wskr8"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.586717 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9qlzn"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.587681 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzfnc"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.588998 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-cf5dj"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.589694 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-cf5dj" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.590063 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.592955 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-cf5dj"] Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.610698 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.629934 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.633566 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c3167bb-dbe2-42bf-8693-cf28b7a9a28c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-fdw8z\" (UID: \"3c3167bb-dbe2-42bf-8693-cf28b7a9a28c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fdw8z" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.633612 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f324f3c7-44fa-473c-8b60-ea30be3b7045-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-25rxs\" (UID: \"f324f3c7-44fa-473c-8b60-ea30be3b7045\") " pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.633649 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de462421-9ecd-4bd7-9b00-f054da067ca6-config\") pod 
\"etcd-operator-b45778765-nv9ml\" (UID: \"de462421-9ecd-4bd7-9b00-f054da067ca6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv9ml" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.633666 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0abe2fa8-3512-46b2-a738-682a833ae488-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7fw2t\" (UID: \"0abe2fa8-3512-46b2-a738-682a833ae488\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7fw2t" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.633722 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6f06beb-72aa-499a-a760-d36404bca577-serving-cert\") pod \"console-operator-58897d9998-xtwlc\" (UID: \"c6f06beb-72aa-499a-a760-d36404bca577\") " pod="openshift-console-operator/console-operator-58897d9998-xtwlc" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.633743 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/306952da-e05e-468a-8e44-5cc64940f7f6-tmpfs\") pod \"packageserver-d55dfcdfc-gr9gr\" (UID: \"306952da-e05e-468a-8e44-5cc64940f7f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr9gr" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.633761 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k52kd\" (UniqueName: \"kubernetes.io/projected/1c479290-3870-4f83-b3e6-a86e91bda22e-kube-api-access-k52kd\") pod \"package-server-manager-789f6589d5-6t8l5\" (UID: \"1c479290-3870-4f83-b3e6-a86e91bda22e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6t8l5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.633799 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f324f3c7-44fa-473c-8b60-ea30be3b7045-serving-cert\") pod \"controller-manager-879f6c89f-25rxs\" (UID: \"f324f3c7-44fa-473c-8b60-ea30be3b7045\") " pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.633826 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grv86\" (UniqueName: \"kubernetes.io/projected/a9bbb41d-c515-43d3-8e35-a73bed39e840-kube-api-access-grv86\") pod \"olm-operator-6b444d44fb-jrgc5\" (UID: \"a9bbb41d-c515-43d3-8e35-a73bed39e840\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jrgc5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.633853 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfecdd47-3bdf-4f99-b34b-dbe793b59717-config\") pod \"authentication-operator-69f744f599-9rwzm\" (UID: \"cfecdd47-3bdf-4f99-b34b-dbe793b59717\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9rwzm" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.633936 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d1d4add5-c462-4ccc-8c65-8efd72b99637-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-scskj\" (UID: \"d1d4add5-c462-4ccc-8c65-8efd72b99637\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scskj" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.633954 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de462421-9ecd-4bd7-9b00-f054da067ca6-serving-cert\") pod \"etcd-operator-b45778765-nv9ml\" (UID: \"de462421-9ecd-4bd7-9b00-f054da067ca6\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-nv9ml" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.633969 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pdf4\" (UniqueName: \"kubernetes.io/projected/de462421-9ecd-4bd7-9b00-f054da067ca6-kube-api-access-4pdf4\") pod \"etcd-operator-b45778765-nv9ml\" (UID: \"de462421-9ecd-4bd7-9b00-f054da067ca6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv9ml" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634001 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f324f3c7-44fa-473c-8b60-ea30be3b7045-config\") pod \"controller-manager-879f6c89f-25rxs\" (UID: \"f324f3c7-44fa-473c-8b60-ea30be3b7045\") " pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634019 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24sdj\" (UniqueName: \"kubernetes.io/projected/d1d4add5-c462-4ccc-8c65-8efd72b99637-kube-api-access-24sdj\") pod \"cluster-image-registry-operator-dc59b4c8b-scskj\" (UID: \"d1d4add5-c462-4ccc-8c65-8efd72b99637\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scskj" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634035 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmn6d\" (UniqueName: \"kubernetes.io/projected/6047f06e-4b55-4a39-be9c-6341c8cf7082-kube-api-access-jmn6d\") pod \"machine-api-operator-5694c8668f-wp2gk\" (UID: \"6047f06e-4b55-4a39-be9c-6341c8cf7082\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wp2gk" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634067 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7300a892-62e1-4ec9-b0c0-83d0aaf90bd3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qr4f5\" (UID: \"7300a892-62e1-4ec9-b0c0-83d0aaf90bd3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qr4f5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634085 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc6b7685-5713-4da2-a9bf-bb61144e3561-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jrdxk\" (UID: \"bc6b7685-5713-4da2-a9bf-bb61144e3561\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrdxk" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634104 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1d4add5-c462-4ccc-8c65-8efd72b99637-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-scskj\" (UID: \"d1d4add5-c462-4ccc-8c65-8efd72b99637\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scskj" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634122 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx6g2\" (UniqueName: \"kubernetes.io/projected/cfecdd47-3bdf-4f99-b34b-dbe793b59717-kube-api-access-sx6g2\") pod \"authentication-operator-69f744f599-9rwzm\" (UID: \"cfecdd47-3bdf-4f99-b34b-dbe793b59717\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9rwzm" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634159 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbv42\" (UniqueName: \"kubernetes.io/projected/f324f3c7-44fa-473c-8b60-ea30be3b7045-kube-api-access-sbv42\") pod \"controller-manager-879f6c89f-25rxs\" (UID: \"f324f3c7-44fa-473c-8b60-ea30be3b7045\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634181 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a9bbb41d-c515-43d3-8e35-a73bed39e840-srv-cert\") pod \"olm-operator-6b444d44fb-jrgc5\" (UID: \"a9bbb41d-c515-43d3-8e35-a73bed39e840\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jrgc5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634196 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/306952da-e05e-468a-8e44-5cc64940f7f6-webhook-cert\") pod \"packageserver-d55dfcdfc-gr9gr\" (UID: \"306952da-e05e-468a-8e44-5cc64940f7f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr9gr" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634243 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf8wx\" (UniqueName: \"kubernetes.io/projected/bc6b7685-5713-4da2-a9bf-bb61144e3561-kube-api-access-wf8wx\") pod \"openshift-controller-manager-operator-756b6f6bc6-jrdxk\" (UID: \"bc6b7685-5713-4da2-a9bf-bb61144e3561\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrdxk" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634262 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmh6n\" (UniqueName: \"kubernetes.io/projected/24a251d4-749c-4d2d-9fb5-8bd2330d7b35-kube-api-access-vmh6n\") pod \"machine-approver-56656f9798-hdr5j\" (UID: \"24a251d4-749c-4d2d-9fb5-8bd2330d7b35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hdr5j" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634278 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/24a251d4-749c-4d2d-9fb5-8bd2330d7b35-config\") pod \"machine-approver-56656f9798-hdr5j\" (UID: \"24a251d4-749c-4d2d-9fb5-8bd2330d7b35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hdr5j" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634320 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a9bbb41d-c515-43d3-8e35-a73bed39e840-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jrgc5\" (UID: \"a9bbb41d-c515-43d3-8e35-a73bed39e840\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jrgc5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634342 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6047f06e-4b55-4a39-be9c-6341c8cf7082-config\") pod \"machine-api-operator-5694c8668f-wp2gk\" (UID: \"6047f06e-4b55-4a39-be9c-6341c8cf7082\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wp2gk" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634343 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/306952da-e05e-468a-8e44-5cc64940f7f6-tmpfs\") pod \"packageserver-d55dfcdfc-gr9gr\" (UID: \"306952da-e05e-468a-8e44-5cc64940f7f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr9gr" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634361 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24a251d4-749c-4d2d-9fb5-8bd2330d7b35-auth-proxy-config\") pod \"machine-approver-56656f9798-hdr5j\" (UID: \"24a251d4-749c-4d2d-9fb5-8bd2330d7b35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hdr5j" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634397 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfecdd47-3bdf-4f99-b34b-dbe793b59717-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9rwzm\" (UID: \"cfecdd47-3bdf-4f99-b34b-dbe793b59717\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9rwzm" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634416 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/17854bb6-7bec-4972-92a8-299702642b45-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vt8qv\" (UID: \"17854bb6-7bec-4972-92a8-299702642b45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vt8qv" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634430 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/24a251d4-749c-4d2d-9fb5-8bd2330d7b35-machine-approver-tls\") pod \"machine-approver-56656f9798-hdr5j\" (UID: \"24a251d4-749c-4d2d-9fb5-8bd2330d7b35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hdr5j" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634446 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5kj6\" (UniqueName: \"kubernetes.io/projected/0abe2fa8-3512-46b2-a738-682a833ae488-kube-api-access-r5kj6\") pod \"control-plane-machine-set-operator-78cbb6b69f-7fw2t\" (UID: \"0abe2fa8-3512-46b2-a738-682a833ae488\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7fw2t" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634496 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfecdd47-3bdf-4f99-b34b-dbe793b59717-serving-cert\") pod \"authentication-operator-69f744f599-9rwzm\" (UID: 
\"cfecdd47-3bdf-4f99-b34b-dbe793b59717\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9rwzm" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634525 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7300a892-62e1-4ec9-b0c0-83d0aaf90bd3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qr4f5\" (UID: \"7300a892-62e1-4ec9-b0c0-83d0aaf90bd3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qr4f5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634597 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f324f3c7-44fa-473c-8b60-ea30be3b7045-client-ca\") pod \"controller-manager-879f6c89f-25rxs\" (UID: \"f324f3c7-44fa-473c-8b60-ea30be3b7045\") " pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634647 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/de462421-9ecd-4bd7-9b00-f054da067ca6-etcd-ca\") pod \"etcd-operator-b45778765-nv9ml\" (UID: \"de462421-9ecd-4bd7-9b00-f054da067ca6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv9ml" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634664 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfecdd47-3bdf-4f99-b34b-dbe793b59717-service-ca-bundle\") pod \"authentication-operator-69f744f599-9rwzm\" (UID: \"cfecdd47-3bdf-4f99-b34b-dbe793b59717\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9rwzm" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634681 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/17854bb6-7bec-4972-92a8-299702642b45-metrics-tls\") pod \"ingress-operator-5b745b69d9-vt8qv\" (UID: \"17854bb6-7bec-4972-92a8-299702642b45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vt8qv" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634736 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bb2be95-3593-4045-8dca-353189946a2f-serving-cert\") pod \"openshift-config-operator-7777fb866f-hflv2\" (UID: \"6bb2be95-3593-4045-8dca-353189946a2f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hflv2" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634763 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc6b7685-5713-4da2-a9bf-bb61144e3561-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jrdxk\" (UID: \"bc6b7685-5713-4da2-a9bf-bb61144e3561\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrdxk" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634781 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17854bb6-7bec-4972-92a8-299702642b45-trusted-ca\") pod \"ingress-operator-5b745b69d9-vt8qv\" (UID: \"17854bb6-7bec-4972-92a8-299702642b45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vt8qv" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634817 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/306952da-e05e-468a-8e44-5cc64940f7f6-apiservice-cert\") pod \"packageserver-d55dfcdfc-gr9gr\" (UID: \"306952da-e05e-468a-8e44-5cc64940f7f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr9gr" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 
12:30:01.634841 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f324f3c7-44fa-473c-8b60-ea30be3b7045-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-25rxs\" (UID: \"f324f3c7-44fa-473c-8b60-ea30be3b7045\") " pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634851 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6047f06e-4b55-4a39-be9c-6341c8cf7082-images\") pod \"machine-api-operator-5694c8668f-wp2gk\" (UID: \"6047f06e-4b55-4a39-be9c-6341c8cf7082\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wp2gk" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634892 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/de462421-9ecd-4bd7-9b00-f054da067ca6-etcd-client\") pod \"etcd-operator-b45778765-nv9ml\" (UID: \"de462421-9ecd-4bd7-9b00-f054da067ca6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv9ml" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634910 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f596\" (UniqueName: \"kubernetes.io/projected/bf226d94-0733-4205-9790-7590b441dac9-kube-api-access-5f596\") pod \"cluster-samples-operator-665b6dd947-pxvcj\" (UID: \"bf226d94-0733-4205-9790-7590b441dac9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pxvcj" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634927 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6bb2be95-3593-4045-8dca-353189946a2f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hflv2\" (UID: \"6bb2be95-3593-4045-8dca-353189946a2f\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-hflv2" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.634983 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1d4add5-c462-4ccc-8c65-8efd72b99637-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-scskj\" (UID: \"d1d4add5-c462-4ccc-8c65-8efd72b99637\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scskj" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.635009 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/de462421-9ecd-4bd7-9b00-f054da067ca6-etcd-service-ca\") pod \"etcd-operator-b45778765-nv9ml\" (UID: \"de462421-9ecd-4bd7-9b00-f054da067ca6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv9ml" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.635036 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8qp9\" (UniqueName: \"kubernetes.io/projected/17854bb6-7bec-4972-92a8-299702642b45-kube-api-access-s8qp9\") pod \"ingress-operator-5b745b69d9-vt8qv\" (UID: \"17854bb6-7bec-4972-92a8-299702642b45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vt8qv" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.635089 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c3167bb-dbe2-42bf-8693-cf28b7a9a28c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-fdw8z\" (UID: \"3c3167bb-dbe2-42bf-8693-cf28b7a9a28c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fdw8z" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.635110 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf226d94-0733-4205-9790-7590b441dac9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pxvcj\" (UID: \"bf226d94-0733-4205-9790-7590b441dac9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pxvcj" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.635167 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvvjd\" (UniqueName: \"kubernetes.io/projected/6bb2be95-3593-4045-8dca-353189946a2f-kube-api-access-cvvjd\") pod \"openshift-config-operator-7777fb866f-hflv2\" (UID: \"6bb2be95-3593-4045-8dca-353189946a2f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hflv2" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.635212 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xnrc\" (UniqueName: \"kubernetes.io/projected/306952da-e05e-468a-8e44-5cc64940f7f6-kube-api-access-5xnrc\") pod \"packageserver-d55dfcdfc-gr9gr\" (UID: \"306952da-e05e-468a-8e44-5cc64940f7f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr9gr" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.635255 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-275kw\" (UniqueName: \"kubernetes.io/projected/c6f06beb-72aa-499a-a760-d36404bca577-kube-api-access-275kw\") pod \"console-operator-58897d9998-xtwlc\" (UID: \"c6f06beb-72aa-499a-a760-d36404bca577\") " pod="openshift-console-operator/console-operator-58897d9998-xtwlc" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.635274 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6047f06e-4b55-4a39-be9c-6341c8cf7082-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wp2gk\" (UID: \"6047f06e-4b55-4a39-be9c-6341c8cf7082\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-wp2gk" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.635297 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c3167bb-dbe2-42bf-8693-cf28b7a9a28c-config\") pod \"kube-apiserver-operator-766d6c64bb-fdw8z\" (UID: \"3c3167bb-dbe2-42bf-8693-cf28b7a9a28c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fdw8z" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.635349 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6f06beb-72aa-499a-a760-d36404bca577-config\") pod \"console-operator-58897d9998-xtwlc\" (UID: \"c6f06beb-72aa-499a-a760-d36404bca577\") " pod="openshift-console-operator/console-operator-58897d9998-xtwlc" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.635370 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1c479290-3870-4f83-b3e6-a86e91bda22e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6t8l5\" (UID: \"1c479290-3870-4f83-b3e6-a86e91bda22e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6t8l5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.635388 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6f06beb-72aa-499a-a760-d36404bca577-trusted-ca\") pod \"console-operator-58897d9998-xtwlc\" (UID: \"c6f06beb-72aa-499a-a760-d36404bca577\") " pod="openshift-console-operator/console-operator-58897d9998-xtwlc" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.635422 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/7300a892-62e1-4ec9-b0c0-83d0aaf90bd3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qr4f5\" (UID: \"7300a892-62e1-4ec9-b0c0-83d0aaf90bd3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qr4f5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.635756 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f324f3c7-44fa-473c-8b60-ea30be3b7045-client-ca\") pod \"controller-manager-879f6c89f-25rxs\" (UID: \"f324f3c7-44fa-473c-8b60-ea30be3b7045\") " pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.636039 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24a251d4-749c-4d2d-9fb5-8bd2330d7b35-config\") pod \"machine-approver-56656f9798-hdr5j\" (UID: \"24a251d4-749c-4d2d-9fb5-8bd2330d7b35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hdr5j" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.636290 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfecdd47-3bdf-4f99-b34b-dbe793b59717-config\") pod \"authentication-operator-69f744f599-9rwzm\" (UID: \"cfecdd47-3bdf-4f99-b34b-dbe793b59717\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9rwzm" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.636635 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f324f3c7-44fa-473c-8b60-ea30be3b7045-config\") pod \"controller-manager-879f6c89f-25rxs\" (UID: \"f324f3c7-44fa-473c-8b60-ea30be3b7045\") " pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.636873 5024 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24a251d4-749c-4d2d-9fb5-8bd2330d7b35-auth-proxy-config\") pod \"machine-approver-56656f9798-hdr5j\" (UID: \"24a251d4-749c-4d2d-9fb5-8bd2330d7b35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hdr5j" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.637373 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfecdd47-3bdf-4f99-b34b-dbe793b59717-service-ca-bundle\") pod \"authentication-operator-69f744f599-9rwzm\" (UID: \"cfecdd47-3bdf-4f99-b34b-dbe793b59717\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9rwzm" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.637379 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6bb2be95-3593-4045-8dca-353189946a2f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hflv2\" (UID: \"6bb2be95-3593-4045-8dca-353189946a2f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hflv2" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.637386 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0abe2fa8-3512-46b2-a738-682a833ae488-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7fw2t\" (UID: \"0abe2fa8-3512-46b2-a738-682a833ae488\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7fw2t" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.638331 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7300a892-62e1-4ec9-b0c0-83d0aaf90bd3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qr4f5\" (UID: 
\"7300a892-62e1-4ec9-b0c0-83d0aaf90bd3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qr4f5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.638746 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6047f06e-4b55-4a39-be9c-6341c8cf7082-images\") pod \"machine-api-operator-5694c8668f-wp2gk\" (UID: \"6047f06e-4b55-4a39-be9c-6341c8cf7082\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wp2gk" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.639274 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f324f3c7-44fa-473c-8b60-ea30be3b7045-serving-cert\") pod \"controller-manager-879f6c89f-25rxs\" (UID: \"f324f3c7-44fa-473c-8b60-ea30be3b7045\") " pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.639678 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/306952da-e05e-468a-8e44-5cc64940f7f6-webhook-cert\") pod \"packageserver-d55dfcdfc-gr9gr\" (UID: \"306952da-e05e-468a-8e44-5cc64940f7f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr9gr" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.639872 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc6b7685-5713-4da2-a9bf-bb61144e3561-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jrdxk\" (UID: \"bc6b7685-5713-4da2-a9bf-bb61144e3561\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrdxk" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.639884 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bc6b7685-5713-4da2-a9bf-bb61144e3561-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jrdxk\" (UID: \"bc6b7685-5713-4da2-a9bf-bb61144e3561\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrdxk" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.640025 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6f06beb-72aa-499a-a760-d36404bca577-config\") pod \"console-operator-58897d9998-xtwlc\" (UID: \"c6f06beb-72aa-499a-a760-d36404bca577\") " pod="openshift-console-operator/console-operator-58897d9998-xtwlc" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.640271 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6f06beb-72aa-499a-a760-d36404bca577-trusted-ca\") pod \"console-operator-58897d9998-xtwlc\" (UID: \"c6f06beb-72aa-499a-a760-d36404bca577\") " pod="openshift-console-operator/console-operator-58897d9998-xtwlc" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.640460 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfecdd47-3bdf-4f99-b34b-dbe793b59717-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9rwzm\" (UID: \"cfecdd47-3bdf-4f99-b34b-dbe793b59717\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9rwzm" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.640638 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6047f06e-4b55-4a39-be9c-6341c8cf7082-config\") pod \"machine-api-operator-5694c8668f-wp2gk\" (UID: \"6047f06e-4b55-4a39-be9c-6341c8cf7082\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wp2gk" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.640879 5024 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6047f06e-4b55-4a39-be9c-6341c8cf7082-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wp2gk\" (UID: \"6047f06e-4b55-4a39-be9c-6341c8cf7082\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wp2gk" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.641188 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/24a251d4-749c-4d2d-9fb5-8bd2330d7b35-machine-approver-tls\") pod \"machine-approver-56656f9798-hdr5j\" (UID: \"24a251d4-749c-4d2d-9fb5-8bd2330d7b35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hdr5j" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.641198 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfecdd47-3bdf-4f99-b34b-dbe793b59717-serving-cert\") pod \"authentication-operator-69f744f599-9rwzm\" (UID: \"cfecdd47-3bdf-4f99-b34b-dbe793b59717\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9rwzm" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.641399 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17854bb6-7bec-4972-92a8-299702642b45-trusted-ca\") pod \"ingress-operator-5b745b69d9-vt8qv\" (UID: \"17854bb6-7bec-4972-92a8-299702642b45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vt8qv" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.641538 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7300a892-62e1-4ec9-b0c0-83d0aaf90bd3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qr4f5\" (UID: \"7300a892-62e1-4ec9-b0c0-83d0aaf90bd3\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qr4f5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.642277 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bb2be95-3593-4045-8dca-353189946a2f-serving-cert\") pod \"openshift-config-operator-7777fb866f-hflv2\" (UID: \"6bb2be95-3593-4045-8dca-353189946a2f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hflv2" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.642539 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf226d94-0733-4205-9790-7590b441dac9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pxvcj\" (UID: \"bf226d94-0733-4205-9790-7590b441dac9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pxvcj" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.642677 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/17854bb6-7bec-4972-92a8-299702642b45-metrics-tls\") pod \"ingress-operator-5b745b69d9-vt8qv\" (UID: \"17854bb6-7bec-4972-92a8-299702642b45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vt8qv" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.643143 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6f06beb-72aa-499a-a760-d36404bca577-serving-cert\") pod \"console-operator-58897d9998-xtwlc\" (UID: \"c6f06beb-72aa-499a-a760-d36404bca577\") " pod="openshift-console-operator/console-operator-58897d9998-xtwlc" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.643172 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/306952da-e05e-468a-8e44-5cc64940f7f6-apiservice-cert\") 
pod \"packageserver-d55dfcdfc-gr9gr\" (UID: \"306952da-e05e-468a-8e44-5cc64940f7f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr9gr" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.649809 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.670226 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.689679 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.709861 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.720498 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1c479290-3870-4f83-b3e6-a86e91bda22e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6t8l5\" (UID: \"1c479290-3870-4f83-b3e6-a86e91bda22e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6t8l5" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.735756 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.739569 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1d4add5-c462-4ccc-8c65-8efd72b99637-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-scskj\" (UID: \"d1d4add5-c462-4ccc-8c65-8efd72b99637\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scskj" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.750282 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.755279 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de462421-9ecd-4bd7-9b00-f054da067ca6-config\") pod \"etcd-operator-b45778765-nv9ml\" (UID: \"de462421-9ecd-4bd7-9b00-f054da067ca6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv9ml" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.770508 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.777900 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de462421-9ecd-4bd7-9b00-f054da067ca6-serving-cert\") pod \"etcd-operator-b45778765-nv9ml\" (UID: \"de462421-9ecd-4bd7-9b00-f054da067ca6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv9ml" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.789940 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.810860 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.822141 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1d4add5-c462-4ccc-8c65-8efd72b99637-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-scskj\" (UID: \"d1d4add5-c462-4ccc-8c65-8efd72b99637\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scskj" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.830818 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.837329 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/de462421-9ecd-4bd7-9b00-f054da067ca6-etcd-service-ca\") pod \"etcd-operator-b45778765-nv9ml\" (UID: \"de462421-9ecd-4bd7-9b00-f054da067ca6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv9ml" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.849933 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.860519 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/de462421-9ecd-4bd7-9b00-f054da067ca6-etcd-client\") pod \"etcd-operator-b45778765-nv9ml\" (UID: \"de462421-9ecd-4bd7-9b00-f054da067ca6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv9ml" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.870460 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.876832 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/de462421-9ecd-4bd7-9b00-f054da067ca6-etcd-ca\") pod \"etcd-operator-b45778765-nv9ml\" (UID: \"de462421-9ecd-4bd7-9b00-f054da067ca6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv9ml" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.890645 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Oct 07 
12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.910190 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.930644 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.970271 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Oct 07 12:30:01 crc kubenswrapper[5024]: I1007 12:30:01.990129 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.010112 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.019079 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c3167bb-dbe2-42bf-8693-cf28b7a9a28c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-fdw8z\" (UID: \"3c3167bb-dbe2-42bf-8693-cf28b7a9a28c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fdw8z" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.030668 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.038753 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c3167bb-dbe2-42bf-8693-cf28b7a9a28c-config\") pod \"kube-apiserver-operator-766d6c64bb-fdw8z\" (UID: \"3c3167bb-dbe2-42bf-8693-cf28b7a9a28c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fdw8z" Oct 
07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.050124 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.059057 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a9bbb41d-c515-43d3-8e35-a73bed39e840-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jrgc5\" (UID: \"a9bbb41d-c515-43d3-8e35-a73bed39e840\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jrgc5" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.070439 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.079190 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a9bbb41d-c515-43d3-8e35-a73bed39e840-srv-cert\") pod \"olm-operator-6b444d44fb-jrgc5\" (UID: \"a9bbb41d-c515-43d3-8e35-a73bed39e840\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jrgc5" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.090756 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.110108 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.130053 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.150652 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.169786 5024 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.189766 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.210547 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.230213 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.250046 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.270575 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.291528 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.309915 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.329864 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.350596 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.371256 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Oct 07 12:30:02 crc kubenswrapper[5024]: 
I1007 12:30:02.390075 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.410780 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.430864 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.450447 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.470310 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.489227 5024 request.go:700] Waited for 1.005674371s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-operator-config&limit=500&resourceVersion=0 Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.490608 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.510489 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.530314 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.550639 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Oct 07 12:30:02 crc kubenswrapper[5024]: 
I1007 12:30:02.570102 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.590904 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.612257 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.630893 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.651385 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.670811 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.691272 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.710251 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.732368 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.752436 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 
12:30:02.770654 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.790655 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.811088 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.831006 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.850131 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.869992 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.903011 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.911164 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.929985 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.950289 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.969809 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Oct 07 12:30:02 crc kubenswrapper[5024]: I1007 12:30:02.989859 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.010010 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.030129 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.050790 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.071065 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.091082 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.110366 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.131380 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.151534 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.184573 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg248\" (UniqueName: \"kubernetes.io/projected/06411252-fabf-416c-8b3f-3cb830b235f4-kube-api-access-kg248\") pod \"oauth-openshift-558db77b4-kw42v\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " pod="openshift-authentication/oauth-openshift-558db77b4-kw42v"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.204400 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5qpn\" (UniqueName: \"kubernetes.io/projected/a74a950f-a98b-45c9-bdd0-0cdda261396f-kube-api-access-c5qpn\") pod \"route-controller-manager-6576b87f9c-8vpl5\" (UID: \"a74a950f-a98b-45c9-bdd0-0cdda261396f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.225240 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxjjc\" (UniqueName: \"kubernetes.io/projected/f631c93e-2066-410d-bfcb-232ee1cced2a-kube-api-access-bxjjc\") pod \"console-f9d7485db-9t72d\" (UID: \"f631c93e-2066-410d-bfcb-232ee1cced2a\") " pod="openshift-console/console-f9d7485db-9t72d"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.243257 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrlgr\" (UniqueName: \"kubernetes.io/projected/f2327164-7135-4dd2-bc42-dd68cefdb772-kube-api-access-rrlgr\") pod \"openshift-apiserver-operator-796bbdcf4f-s266g\" (UID: \"f2327164-7135-4dd2-bc42-dd68cefdb772\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s266g"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.258641 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.264501 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx96j\" (UniqueName: \"kubernetes.io/projected/a55e7ecf-f2fa-4e64-af0c-c7a0651ded29-kube-api-access-dx96j\") pod \"apiserver-76f77b778f-bdgps\" (UID: \"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29\") " pod="openshift-apiserver/apiserver-76f77b778f-bdgps"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.267266 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kw42v"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.279489 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-bdgps"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.283102 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzv9g\" (UniqueName: \"kubernetes.io/projected/fd6f037f-c253-488b-9386-19aa7fab7fec-kube-api-access-tzv9g\") pod \"apiserver-7bbb656c7d-ml5hz\" (UID: \"fd6f037f-c253-488b-9386-19aa7fab7fec\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.290098 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s266g"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.303269 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwnt6\" (UniqueName: \"kubernetes.io/projected/563d2566-d201-4094-9f4d-20a167bfd0f7-kube-api-access-nwnt6\") pod \"downloads-7954f5f757-wh6d2\" (UID: \"563d2566-d201-4094-9f4d-20a167bfd0f7\") " pod="openshift-console/downloads-7954f5f757-wh6d2"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.330393 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.349938 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.369474 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.389980 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.410212 5024 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.430959 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.450051 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.460442 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-wh6d2"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.472124 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.484773 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9t72d"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.491787 5024 request.go:700] Waited for 1.901790506s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&limit=500&resourceVersion=0
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.493618 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.511228 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.525706 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.545876 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c3167bb-dbe2-42bf-8693-cf28b7a9a28c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-fdw8z\" (UID: \"3c3167bb-dbe2-42bf-8693-cf28b7a9a28c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fdw8z"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.567476 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d1d4add5-c462-4ccc-8c65-8efd72b99637-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-scskj\" (UID: \"d1d4add5-c462-4ccc-8c65-8efd72b99637\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scskj"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.594442 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k52kd\" (UniqueName: \"kubernetes.io/projected/1c479290-3870-4f83-b3e6-a86e91bda22e-kube-api-access-k52kd\") pod \"package-server-manager-789f6589d5-6t8l5\" (UID: \"1c479290-3870-4f83-b3e6-a86e91bda22e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6t8l5"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.605384 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pdf4\" (UniqueName: \"kubernetes.io/projected/de462421-9ecd-4bd7-9b00-f054da067ca6-kube-api-access-4pdf4\") pod \"etcd-operator-b45778765-nv9ml\" (UID: \"de462421-9ecd-4bd7-9b00-f054da067ca6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nv9ml"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.626465 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kw42v"]
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.628911 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5"]
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.634983 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grv86\" (UniqueName: \"kubernetes.io/projected/a9bbb41d-c515-43d3-8e35-a73bed39e840-kube-api-access-grv86\") pod \"olm-operator-6b444d44fb-jrgc5\" (UID: \"a9bbb41d-c515-43d3-8e35-a73bed39e840\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jrgc5"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.650808 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbv42\" (UniqueName: \"kubernetes.io/projected/f324f3c7-44fa-473c-8b60-ea30be3b7045-kube-api-access-sbv42\") pod \"controller-manager-879f6c89f-25rxs\" (UID: \"f324f3c7-44fa-473c-8b60-ea30be3b7045\") " pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.671404 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/17854bb6-7bec-4972-92a8-299702642b45-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vt8qv\" (UID: \"17854bb6-7bec-4972-92a8-299702642b45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vt8qv"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.686482 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7300a892-62e1-4ec9-b0c0-83d0aaf90bd3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qr4f5\" (UID: \"7300a892-62e1-4ec9-b0c0-83d0aaf90bd3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qr4f5"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.705607 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24sdj\" (UniqueName: \"kubernetes.io/projected/d1d4add5-c462-4ccc-8c65-8efd72b99637-kube-api-access-24sdj\") pod \"cluster-image-registry-operator-dc59b4c8b-scskj\" (UID: \"d1d4add5-c462-4ccc-8c65-8efd72b99637\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scskj"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.724195 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf8wx\" (UniqueName: \"kubernetes.io/projected/bc6b7685-5713-4da2-a9bf-bb61144e3561-kube-api-access-wf8wx\") pod \"openshift-controller-manager-operator-756b6f6bc6-jrdxk\" (UID: \"bc6b7685-5713-4da2-a9bf-bb61144e3561\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrdxk"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.729037 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qr4f5"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.744309 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmn6d\" (UniqueName: \"kubernetes.io/projected/6047f06e-4b55-4a39-be9c-6341c8cf7082-kube-api-access-jmn6d\") pod \"machine-api-operator-5694c8668f-wp2gk\" (UID: \"6047f06e-4b55-4a39-be9c-6341c8cf7082\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wp2gk"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.765014 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmh6n\" (UniqueName: \"kubernetes.io/projected/24a251d4-749c-4d2d-9fb5-8bd2330d7b35-kube-api-access-vmh6n\") pod \"machine-approver-56656f9798-hdr5j\" (UID: \"24a251d4-749c-4d2d-9fb5-8bd2330d7b35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hdr5j"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.765279 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6t8l5"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.772991 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scskj"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.779535 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nv9ml"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.787565 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fdw8z"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.787990 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-275kw\" (UniqueName: \"kubernetes.io/projected/c6f06beb-72aa-499a-a760-d36404bca577-kube-api-access-275kw\") pod \"console-operator-58897d9998-xtwlc\" (UID: \"c6f06beb-72aa-499a-a760-d36404bca577\") " pod="openshift-console-operator/console-operator-58897d9998-xtwlc"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.792540 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jrgc5"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.808324 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8qp9\" (UniqueName: \"kubernetes.io/projected/17854bb6-7bec-4972-92a8-299702642b45-kube-api-access-s8qp9\") pod \"ingress-operator-5b745b69d9-vt8qv\" (UID: \"17854bb6-7bec-4972-92a8-299702642b45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vt8qv"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.822329 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-wh6d2"]
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.823452 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx6g2\" (UniqueName: \"kubernetes.io/projected/cfecdd47-3bdf-4f99-b34b-dbe793b59717-kube-api-access-sx6g2\") pod \"authentication-operator-69f744f599-9rwzm\" (UID: \"cfecdd47-3bdf-4f99-b34b-dbe793b59717\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9rwzm"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.845291 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvvjd\" (UniqueName: \"kubernetes.io/projected/6bb2be95-3593-4045-8dca-353189946a2f-kube-api-access-cvvjd\") pod \"openshift-config-operator-7777fb866f-hflv2\" (UID: \"6bb2be95-3593-4045-8dca-353189946a2f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hflv2"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.845791 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz"]
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.865771 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xnrc\" (UniqueName: \"kubernetes.io/projected/306952da-e05e-468a-8e44-5cc64940f7f6-kube-api-access-5xnrc\") pod \"packageserver-d55dfcdfc-gr9gr\" (UID: \"306952da-e05e-468a-8e44-5cc64940f7f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr9gr"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.885672 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f596\" (UniqueName: \"kubernetes.io/projected/bf226d94-0733-4205-9790-7590b441dac9-kube-api-access-5f596\") pod \"cluster-samples-operator-665b6dd947-pxvcj\" (UID: \"bf226d94-0733-4205-9790-7590b441dac9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pxvcj"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.901437 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-wp2gk"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.904556 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5kj6\" (UniqueName: \"kubernetes.io/projected/0abe2fa8-3512-46b2-a738-682a833ae488-kube-api-access-r5kj6\") pod \"control-plane-machine-set-operator-78cbb6b69f-7fw2t\" (UID: \"0abe2fa8-3512-46b2-a738-682a833ae488\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7fw2t"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.908232 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pxvcj"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.932292 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.932706 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9t72d"]
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.944675 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s266g"]
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.955183 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bdgps"]
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.963657 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1f471f75-4e1e-4093-975b-e02e2b7f8b32-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.963713 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.963772 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr8cd\" (UniqueName: \"kubernetes.io/projected/dacf4124-e72b-4d0f-8e4b-f43b083275b8-kube-api-access-nr8cd\") pod \"service-ca-operator-777779d784-nhfn7\" (UID: \"dacf4124-e72b-4d0f-8e4b-f43b083275b8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nhfn7"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.963797 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d52d4285-2b77-4a88-8f02-add6e0de37ff-proxy-tls\") pod \"machine-config-operator-74547568cd-p9sts\" (UID: \"d52d4285-2b77-4a88-8f02-add6e0de37ff\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p9sts"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.963815 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b3ccf7f5-1756-4f98-8b76-fe7f9ae77075-default-certificate\") pod \"router-default-5444994796-l4pwr\" (UID: \"b3ccf7f5-1756-4f98-8b76-fe7f9ae77075\") " pod="openshift-ingress/router-default-5444994796-l4pwr"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.963831 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/447bfecb-a799-47cc-ad14-2a10bc594d95-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fqbxr\" (UID: \"447bfecb-a799-47cc-ad14-2a10bc594d95\") " pod="openshift-marketplace/marketplace-operator-79b997595-fqbxr"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.963858 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5da4f2d-db84-4a91-8f7e-7843f4062df6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dvg5m\" (UID: \"c5da4f2d-db84-4a91-8f7e-7843f4062df6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dvg5m"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.963884 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3ccf7f5-1756-4f98-8b76-fe7f9ae77075-service-ca-bundle\") pod \"router-default-5444994796-l4pwr\" (UID: \"b3ccf7f5-1756-4f98-8b76-fe7f9ae77075\") " pod="openshift-ingress/router-default-5444994796-l4pwr"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.963954 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqx42\" (UniqueName: \"kubernetes.io/projected/e31ac6bf-7d41-4b00-a89d-eb64ebc0e9f1-kube-api-access-tqx42\") pod \"machine-config-controller-84d6567774-fxzgs\" (UID: \"e31ac6bf-7d41-4b00-a89d-eb64ebc0e9f1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fxzgs"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.964337 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e31ac6bf-7d41-4b00-a89d-eb64ebc0e9f1-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-fxzgs\" (UID: \"e31ac6bf-7d41-4b00-a89d-eb64ebc0e9f1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fxzgs"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.964376 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbf3be08-1d9c-4f7a-af37-48a8f2aa1159-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-x57mw\" (UID: \"bbf3be08-1d9c-4f7a-af37-48a8f2aa1159\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x57mw"
Oct 07 12:30:03 crc kubenswrapper[5024]: E1007 12:30:03.964476 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:04.464460765 +0000 UTC m=+142.540247613 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.964701 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a9bd5896-2df2-4367-be33-9891f0bc67aa-profile-collector-cert\") pod \"catalog-operator-68c6474976-jzfnc\" (UID: \"a9bd5896-2df2-4367-be33-9891f0bc67aa\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzfnc"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.964736 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7b42db20-9e0c-4afa-b850-4f0e485b17e8-signing-cabundle\") pod \"service-ca-9c57cc56f-zh82b\" (UID: \"7b42db20-9e0c-4afa-b850-4f0e485b17e8\") " pod="openshift-service-ca/service-ca-9c57cc56f-zh82b"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.964752 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xg62\" (UniqueName: \"kubernetes.io/projected/1b35c5bc-b3c2-4109-928a-3d1898fdca29-kube-api-access-7xg62\") pod \"migrator-59844c95c7-2p5j7\" (UID: \"1b35c5bc-b3c2-4109-928a-3d1898fdca29\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2p5j7"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.964966 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7b42db20-9e0c-4afa-b850-4f0e485b17e8-signing-key\") pod \"service-ca-9c57cc56f-zh82b\" (UID: \"7b42db20-9e0c-4afa-b850-4f0e485b17e8\") " pod="openshift-service-ca/service-ca-9c57cc56f-zh82b"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.964994 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjrbw\" (UniqueName: \"kubernetes.io/projected/b3ccf7f5-1756-4f98-8b76-fe7f9ae77075-kube-api-access-rjrbw\") pod \"router-default-5444994796-l4pwr\" (UID: \"b3ccf7f5-1756-4f98-8b76-fe7f9ae77075\") " pod="openshift-ingress/router-default-5444994796-l4pwr"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.965086 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hbpw\" (UniqueName: \"kubernetes.io/projected/1f471f75-4e1e-4093-975b-e02e2b7f8b32-kube-api-access-8hbpw\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.965205 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5da4f2d-db84-4a91-8f7e-7843f4062df6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dvg5m\" (UID: \"c5da4f2d-db84-4a91-8f7e-7843f4062df6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dvg5m"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.965224 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/06b21306-4b72-47d2-b814-9b4b333295bb-metrics-tls\") pod \"dns-operator-744455d44c-frvxs\" (UID: \"06b21306-4b72-47d2-b814-9b4b333295bb\") " pod="openshift-dns-operator/dns-operator-744455d44c-frvxs"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.965258 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e31ac6bf-7d41-4b00-a89d-eb64ebc0e9f1-proxy-tls\") pod \"machine-config-controller-84d6567774-fxzgs\" (UID: \"e31ac6bf-7d41-4b00-a89d-eb64ebc0e9f1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fxzgs"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.965349 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d52d4285-2b77-4a88-8f02-add6e0de37ff-images\") pod \"machine-config-operator-74547568cd-p9sts\" (UID: \"d52d4285-2b77-4a88-8f02-add6e0de37ff\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p9sts"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.965840 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf6jc\" (UniqueName: \"kubernetes.io/projected/f1234432-773a-4c0b-99db-d37df59ec9b6-kube-api-access-rf6jc\") pod \"machine-config-server-m97mg\" (UID: \"f1234432-773a-4c0b-99db-d37df59ec9b6\") " pod="openshift-machine-config-operator/machine-config-server-m97mg"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.966384 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drc7k\" (UniqueName: \"kubernetes.io/projected/447bfecb-a799-47cc-ad14-2a10bc594d95-kube-api-access-drc7k\") pod \"marketplace-operator-79b997595-fqbxr\" (UID: \"447bfecb-a799-47cc-ad14-2a10bc594d95\") " pod="openshift-marketplace/marketplace-operator-79b997595-fqbxr"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.966671 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f1234432-773a-4c0b-99db-d37df59ec9b6-certs\") pod \"machine-config-server-m97mg\" (UID: \"f1234432-773a-4c0b-99db-d37df59ec9b6\") " pod="openshift-machine-config-operator/machine-config-server-m97mg"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.967229 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1f471f75-4e1e-4093-975b-e02e2b7f8b32-registry-certificates\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.967499 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a9bd5896-2df2-4367-be33-9891f0bc67aa-srv-cert\") pod \"catalog-operator-68c6474976-jzfnc\" (UID: \"a9bd5896-2df2-4367-be33-9891f0bc67aa\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzfnc"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.967597 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3096e7a7-c9b4-47ab-8336-b87e49e4521b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-whw6l\" (UID: \"3096e7a7-c9b4-47ab-8336-b87e49e4521b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-whw6l"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.967710 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgkhh\" (UniqueName: \"kubernetes.io/projected/d52d4285-2b77-4a88-8f02-add6e0de37ff-kube-api-access-lgkhh\") pod \"machine-config-operator-74547568cd-p9sts\" (UID: \"d52d4285-2b77-4a88-8f02-add6e0de37ff\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p9sts"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.968090 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f1234432-773a-4c0b-99db-d37df59ec9b6-node-bootstrap-token\") pod \"machine-config-server-m97mg\" (UID: \"f1234432-773a-4c0b-99db-d37df59ec9b6\") " pod="openshift-machine-config-operator/machine-config-server-m97mg"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.968185 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnw9n\" (UniqueName: \"kubernetes.io/projected/06b21306-4b72-47d2-b814-9b4b333295bb-kube-api-access-nnw9n\") pod \"dns-operator-744455d44c-frvxs\" (UID: \"06b21306-4b72-47d2-b814-9b4b333295bb\") " pod="openshift-dns-operator/dns-operator-744455d44c-frvxs"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.968536 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3ccf7f5-1756-4f98-8b76-fe7f9ae77075-metrics-certs\") pod \"router-default-5444994796-l4pwr\" (UID: \"b3ccf7f5-1756-4f98-8b76-fe7f9ae77075\") " pod="openshift-ingress/router-default-5444994796-l4pwr"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.968720 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f471f75-4e1e-4093-975b-e02e2b7f8b32-trusted-ca\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.969235 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1f471f75-4e1e-4093-975b-e02e2b7f8b32-registry-tls\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.969496 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1f471f75-4e1e-4093-975b-e02e2b7f8b32-bound-sa-token\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.970001 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dacf4124-e72b-4d0f-8e4b-f43b083275b8-serving-cert\") pod \"service-ca-operator-777779d784-nhfn7\" (UID: \"dacf4124-e72b-4d0f-8e4b-f43b083275b8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nhfn7"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.970338 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbf3be08-1d9c-4f7a-af37-48a8f2aa1159-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-x57mw\" (UID: \"bbf3be08-1d9c-4f7a-af37-48a8f2aa1159\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x57mw"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.970600 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dacf4124-e72b-4d0f-8e4b-f43b083275b8-config\") pod \"service-ca-operator-777779d784-nhfn7\" (UID: \"dacf4124-e72b-4d0f-8e4b-f43b083275b8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nhfn7"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.970630 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b3ccf7f5-1756-4f98-8b76-fe7f9ae77075-stats-auth\") pod \"router-default-5444994796-l4pwr\" (UID: \"b3ccf7f5-1756-4f98-8b76-fe7f9ae77075\") " pod="openshift-ingress/router-default-5444994796-l4pwr"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.970646 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nztt6\" (UniqueName: \"kubernetes.io/projected/0532ff61-84a7-44b0-b8d3-d6ffad413de5-kube-api-access-nztt6\") pod \"collect-profiles-29330670-5868z\" (UID: \"0532ff61-84a7-44b0-b8d3-d6ffad413de5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330670-5868z"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.970692 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbf3be08-1d9c-4f7a-af37-48a8f2aa1159-config\") pod \"kube-controller-manager-operator-78b949d7b-x57mw\" (UID: \"bbf3be08-1d9c-4f7a-af37-48a8f2aa1159\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x57mw"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.970734 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0532ff61-84a7-44b0-b8d3-d6ffad413de5-config-volume\") pod \"collect-profiles-29330670-5868z\" (UID: \"0532ff61-84a7-44b0-b8d3-d6ffad413de5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330670-5868z"
Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.970782 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-5rg5d\" (UniqueName: \"kubernetes.io/projected/a9bd5896-2df2-4367-be33-9891f0bc67aa-kube-api-access-5rg5d\") pod \"catalog-operator-68c6474976-jzfnc\" (UID: \"a9bd5896-2df2-4367-be33-9891f0bc67aa\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzfnc" Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.970808 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dfdf\" (UniqueName: \"kubernetes.io/projected/3096e7a7-c9b4-47ab-8336-b87e49e4521b-kube-api-access-5dfdf\") pod \"multus-admission-controller-857f4d67dd-whw6l\" (UID: \"3096e7a7-c9b4-47ab-8336-b87e49e4521b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-whw6l" Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.970827 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pbm5\" (UniqueName: \"kubernetes.io/projected/7b42db20-9e0c-4afa-b850-4f0e485b17e8-kube-api-access-6pbm5\") pod \"service-ca-9c57cc56f-zh82b\" (UID: \"7b42db20-9e0c-4afa-b850-4f0e485b17e8\") " pod="openshift-service-ca/service-ca-9c57cc56f-zh82b" Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.970842 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/447bfecb-a799-47cc-ad14-2a10bc594d95-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fqbxr\" (UID: \"447bfecb-a799-47cc-ad14-2a10bc594d95\") " pod="openshift-marketplace/marketplace-operator-79b997595-fqbxr" Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.970865 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d52d4285-2b77-4a88-8f02-add6e0de37ff-auth-proxy-config\") pod \"machine-config-operator-74547568cd-p9sts\" (UID: 
\"d52d4285-2b77-4a88-8f02-add6e0de37ff\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p9sts" Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.970881 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1f471f75-4e1e-4093-975b-e02e2b7f8b32-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.970898 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk6qw\" (UniqueName: \"kubernetes.io/projected/c5da4f2d-db84-4a91-8f7e-7843f4062df6-kube-api-access-vk6qw\") pod \"kube-storage-version-migrator-operator-b67b599dd-dvg5m\" (UID: \"c5da4f2d-db84-4a91-8f7e-7843f4062df6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dvg5m" Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.970925 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0532ff61-84a7-44b0-b8d3-d6ffad413de5-secret-volume\") pod \"collect-profiles-29330670-5868z\" (UID: \"0532ff61-84a7-44b0-b8d3-d6ffad413de5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330670-5868z" Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.973100 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrdxk" Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.980645 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hdr5j" Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.989024 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9rwzm" Oct 07 12:30:03 crc kubenswrapper[5024]: I1007 12:30:03.997360 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hflv2" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.003295 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xtwlc" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.010286 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vt8qv" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.017797 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7fw2t" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.035544 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr9gr" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.071648 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:04 crc kubenswrapper[5024]: E1007 12:30:04.071830 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:04.571791248 +0000 UTC m=+142.647578086 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.071894 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dfdf\" (UniqueName: \"kubernetes.io/projected/3096e7a7-c9b4-47ab-8336-b87e49e4521b-kube-api-access-5dfdf\") pod \"multus-admission-controller-857f4d67dd-whw6l\" (UID: \"3096e7a7-c9b4-47ab-8336-b87e49e4521b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-whw6l" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.071923 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czg2n\" (UniqueName: 
\"kubernetes.io/projected/c7b3c652-baa1-4549-9b0b-974f430b56dd-kube-api-access-czg2n\") pod \"csi-hostpathplugin-9qlzn\" (UID: \"c7b3c652-baa1-4549-9b0b-974f430b56dd\") " pod="hostpath-provisioner/csi-hostpathplugin-9qlzn" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.071946 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pbm5\" (UniqueName: \"kubernetes.io/projected/7b42db20-9e0c-4afa-b850-4f0e485b17e8-kube-api-access-6pbm5\") pod \"service-ca-9c57cc56f-zh82b\" (UID: \"7b42db20-9e0c-4afa-b850-4f0e485b17e8\") " pod="openshift-service-ca/service-ca-9c57cc56f-zh82b" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.071966 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/447bfecb-a799-47cc-ad14-2a10bc594d95-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fqbxr\" (UID: \"447bfecb-a799-47cc-ad14-2a10bc594d95\") " pod="openshift-marketplace/marketplace-operator-79b997595-fqbxr" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.071984 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d52d4285-2b77-4a88-8f02-add6e0de37ff-auth-proxy-config\") pod \"machine-config-operator-74547568cd-p9sts\" (UID: \"d52d4285-2b77-4a88-8f02-add6e0de37ff\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p9sts" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072002 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1f471f75-4e1e-4093-975b-e02e2b7f8b32-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072019 5024 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk6qw\" (UniqueName: \"kubernetes.io/projected/c5da4f2d-db84-4a91-8f7e-7843f4062df6-kube-api-access-vk6qw\") pod \"kube-storage-version-migrator-operator-b67b599dd-dvg5m\" (UID: \"c5da4f2d-db84-4a91-8f7e-7843f4062df6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dvg5m" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072057 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0532ff61-84a7-44b0-b8d3-d6ffad413de5-secret-volume\") pod \"collect-profiles-29330670-5868z\" (UID: \"0532ff61-84a7-44b0-b8d3-d6ffad413de5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330670-5868z" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072083 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1f471f75-4e1e-4093-975b-e02e2b7f8b32-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072162 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072199 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/c7b3c652-baa1-4549-9b0b-974f430b56dd-registration-dir\") pod \"csi-hostpathplugin-9qlzn\" (UID: \"c7b3c652-baa1-4549-9b0b-974f430b56dd\") " pod="hostpath-provisioner/csi-hostpathplugin-9qlzn" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072218 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d52d4285-2b77-4a88-8f02-add6e0de37ff-proxy-tls\") pod \"machine-config-operator-74547568cd-p9sts\" (UID: \"d52d4285-2b77-4a88-8f02-add6e0de37ff\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p9sts" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072242 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b3ccf7f5-1756-4f98-8b76-fe7f9ae77075-default-certificate\") pod \"router-default-5444994796-l4pwr\" (UID: \"b3ccf7f5-1756-4f98-8b76-fe7f9ae77075\") " pod="openshift-ingress/router-default-5444994796-l4pwr" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072260 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/447bfecb-a799-47cc-ad14-2a10bc594d95-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fqbxr\" (UID: \"447bfecb-a799-47cc-ad14-2a10bc594d95\") " pod="openshift-marketplace/marketplace-operator-79b997595-fqbxr" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072280 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr8cd\" (UniqueName: \"kubernetes.io/projected/dacf4124-e72b-4d0f-8e4b-f43b083275b8-kube-api-access-nr8cd\") pod \"service-ca-operator-777779d784-nhfn7\" (UID: \"dacf4124-e72b-4d0f-8e4b-f43b083275b8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nhfn7" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072303 
5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5da4f2d-db84-4a91-8f7e-7843f4062df6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dvg5m\" (UID: \"c5da4f2d-db84-4a91-8f7e-7843f4062df6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dvg5m" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072325 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3ccf7f5-1756-4f98-8b76-fe7f9ae77075-service-ca-bundle\") pod \"router-default-5444994796-l4pwr\" (UID: \"b3ccf7f5-1756-4f98-8b76-fe7f9ae77075\") " pod="openshift-ingress/router-default-5444994796-l4pwr" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072354 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqx42\" (UniqueName: \"kubernetes.io/projected/e31ac6bf-7d41-4b00-a89d-eb64ebc0e9f1-kube-api-access-tqx42\") pod \"machine-config-controller-84d6567774-fxzgs\" (UID: \"e31ac6bf-7d41-4b00-a89d-eb64ebc0e9f1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fxzgs" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072375 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbf3be08-1d9c-4f7a-af37-48a8f2aa1159-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-x57mw\" (UID: \"bbf3be08-1d9c-4f7a-af37-48a8f2aa1159\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x57mw" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072391 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c495c046-5e10-4020-a686-2834edbe289e-cert\") pod 
\"ingress-canary-cf5dj\" (UID: \"c495c046-5e10-4020-a686-2834edbe289e\") " pod="openshift-ingress-canary/ingress-canary-cf5dj" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072409 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e31ac6bf-7d41-4b00-a89d-eb64ebc0e9f1-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-fxzgs\" (UID: \"e31ac6bf-7d41-4b00-a89d-eb64ebc0e9f1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fxzgs" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072433 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a9bd5896-2df2-4367-be33-9891f0bc67aa-profile-collector-cert\") pod \"catalog-operator-68c6474976-jzfnc\" (UID: \"a9bd5896-2df2-4367-be33-9891f0bc67aa\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzfnc" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072447 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c7b3c652-baa1-4549-9b0b-974f430b56dd-plugins-dir\") pod \"csi-hostpathplugin-9qlzn\" (UID: \"c7b3c652-baa1-4549-9b0b-974f430b56dd\") " pod="hostpath-provisioner/csi-hostpathplugin-9qlzn" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072468 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7b42db20-9e0c-4afa-b850-4f0e485b17e8-signing-cabundle\") pod \"service-ca-9c57cc56f-zh82b\" (UID: \"7b42db20-9e0c-4afa-b850-4f0e485b17e8\") " pod="openshift-service-ca/service-ca-9c57cc56f-zh82b" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072483 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xg62\" 
(UniqueName: \"kubernetes.io/projected/1b35c5bc-b3c2-4109-928a-3d1898fdca29-kube-api-access-7xg62\") pod \"migrator-59844c95c7-2p5j7\" (UID: \"1b35c5bc-b3c2-4109-928a-3d1898fdca29\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2p5j7" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072501 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7b42db20-9e0c-4afa-b850-4f0e485b17e8-signing-key\") pod \"service-ca-9c57cc56f-zh82b\" (UID: \"7b42db20-9e0c-4afa-b850-4f0e485b17e8\") " pod="openshift-service-ca/service-ca-9c57cc56f-zh82b" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072516 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjrbw\" (UniqueName: \"kubernetes.io/projected/b3ccf7f5-1756-4f98-8b76-fe7f9ae77075-kube-api-access-rjrbw\") pod \"router-default-5444994796-l4pwr\" (UID: \"b3ccf7f5-1756-4f98-8b76-fe7f9ae77075\") " pod="openshift-ingress/router-default-5444994796-l4pwr" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072532 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hbpw\" (UniqueName: \"kubernetes.io/projected/1f471f75-4e1e-4093-975b-e02e2b7f8b32-kube-api-access-8hbpw\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072555 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5da4f2d-db84-4a91-8f7e-7843f4062df6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dvg5m\" (UID: \"c5da4f2d-db84-4a91-8f7e-7843f4062df6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dvg5m" Oct 07 12:30:04 crc kubenswrapper[5024]: 
I1007 12:30:04.072569 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/06b21306-4b72-47d2-b814-9b4b333295bb-metrics-tls\") pod \"dns-operator-744455d44c-frvxs\" (UID: \"06b21306-4b72-47d2-b814-9b4b333295bb\") " pod="openshift-dns-operator/dns-operator-744455d44c-frvxs" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072594 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e31ac6bf-7d41-4b00-a89d-eb64ebc0e9f1-proxy-tls\") pod \"machine-config-controller-84d6567774-fxzgs\" (UID: \"e31ac6bf-7d41-4b00-a89d-eb64ebc0e9f1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fxzgs" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072626 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d52d4285-2b77-4a88-8f02-add6e0de37ff-images\") pod \"machine-config-operator-74547568cd-p9sts\" (UID: \"d52d4285-2b77-4a88-8f02-add6e0de37ff\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p9sts" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072645 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf6jc\" (UniqueName: \"kubernetes.io/projected/f1234432-773a-4c0b-99db-d37df59ec9b6-kube-api-access-rf6jc\") pod \"machine-config-server-m97mg\" (UID: \"f1234432-773a-4c0b-99db-d37df59ec9b6\") " pod="openshift-machine-config-operator/machine-config-server-m97mg" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072667 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drc7k\" (UniqueName: \"kubernetes.io/projected/447bfecb-a799-47cc-ad14-2a10bc594d95-kube-api-access-drc7k\") pod \"marketplace-operator-79b997595-fqbxr\" (UID: \"447bfecb-a799-47cc-ad14-2a10bc594d95\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-fqbxr" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072685 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f1234432-773a-4c0b-99db-d37df59ec9b6-certs\") pod \"machine-config-server-m97mg\" (UID: \"f1234432-773a-4c0b-99db-d37df59ec9b6\") " pod="openshift-machine-config-operator/machine-config-server-m97mg" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072701 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1f471f75-4e1e-4093-975b-e02e2b7f8b32-registry-certificates\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072725 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgkhh\" (UniqueName: \"kubernetes.io/projected/d52d4285-2b77-4a88-8f02-add6e0de37ff-kube-api-access-lgkhh\") pod \"machine-config-operator-74547568cd-p9sts\" (UID: \"d52d4285-2b77-4a88-8f02-add6e0de37ff\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p9sts" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072739 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a9bd5896-2df2-4367-be33-9891f0bc67aa-srv-cert\") pod \"catalog-operator-68c6474976-jzfnc\" (UID: \"a9bd5896-2df2-4367-be33-9891f0bc67aa\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzfnc" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072753 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3096e7a7-c9b4-47ab-8336-b87e49e4521b-webhook-certs\") pod 
\"multus-admission-controller-857f4d67dd-whw6l\" (UID: \"3096e7a7-c9b4-47ab-8336-b87e49e4521b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-whw6l" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072773 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f1234432-773a-4c0b-99db-d37df59ec9b6-node-bootstrap-token\") pod \"machine-config-server-m97mg\" (UID: \"f1234432-773a-4c0b-99db-d37df59ec9b6\") " pod="openshift-machine-config-operator/machine-config-server-m97mg" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072792 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnw9n\" (UniqueName: \"kubernetes.io/projected/06b21306-4b72-47d2-b814-9b4b333295bb-kube-api-access-nnw9n\") pod \"dns-operator-744455d44c-frvxs\" (UID: \"06b21306-4b72-47d2-b814-9b4b333295bb\") " pod="openshift-dns-operator/dns-operator-744455d44c-frvxs" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072811 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/97837bed-3f7c-4bf8-be43-550ea11c0a98-metrics-tls\") pod \"dns-default-wskr8\" (UID: \"97837bed-3f7c-4bf8-be43-550ea11c0a98\") " pod="openshift-dns/dns-default-wskr8" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072838 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3ccf7f5-1756-4f98-8b76-fe7f9ae77075-metrics-certs\") pod \"router-default-5444994796-l4pwr\" (UID: \"b3ccf7f5-1756-4f98-8b76-fe7f9ae77075\") " pod="openshift-ingress/router-default-5444994796-l4pwr" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072855 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/1f471f75-4e1e-4093-975b-e02e2b7f8b32-trusted-ca\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072880 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1f471f75-4e1e-4093-975b-e02e2b7f8b32-registry-tls\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072895 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1f471f75-4e1e-4093-975b-e02e2b7f8b32-bound-sa-token\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072934 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dacf4124-e72b-4d0f-8e4b-f43b083275b8-serving-cert\") pod \"service-ca-operator-777779d784-nhfn7\" (UID: \"dacf4124-e72b-4d0f-8e4b-f43b083275b8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nhfn7" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072957 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jcx7\" (UniqueName: \"kubernetes.io/projected/c495c046-5e10-4020-a686-2834edbe289e-kube-api-access-7jcx7\") pod \"ingress-canary-cf5dj\" (UID: \"c495c046-5e10-4020-a686-2834edbe289e\") " pod="openshift-ingress-canary/ingress-canary-cf5dj" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.072981 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwbdd\" (UniqueName: \"kubernetes.io/projected/97837bed-3f7c-4bf8-be43-550ea11c0a98-kube-api-access-mwbdd\") pod \"dns-default-wskr8\" (UID: \"97837bed-3f7c-4bf8-be43-550ea11c0a98\") " pod="openshift-dns/dns-default-wskr8" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.073008 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbf3be08-1d9c-4f7a-af37-48a8f2aa1159-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-x57mw\" (UID: \"bbf3be08-1d9c-4f7a-af37-48a8f2aa1159\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x57mw" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.073033 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c7b3c652-baa1-4549-9b0b-974f430b56dd-socket-dir\") pod \"csi-hostpathplugin-9qlzn\" (UID: \"c7b3c652-baa1-4549-9b0b-974f430b56dd\") " pod="hostpath-provisioner/csi-hostpathplugin-9qlzn" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.073058 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c7b3c652-baa1-4549-9b0b-974f430b56dd-csi-data-dir\") pod \"csi-hostpathplugin-9qlzn\" (UID: \"c7b3c652-baa1-4549-9b0b-974f430b56dd\") " pod="hostpath-provisioner/csi-hostpathplugin-9qlzn" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.073086 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dacf4124-e72b-4d0f-8e4b-f43b083275b8-config\") pod \"service-ca-operator-777779d784-nhfn7\" (UID: \"dacf4124-e72b-4d0f-8e4b-f43b083275b8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nhfn7" Oct 07 
12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.073108 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97837bed-3f7c-4bf8-be43-550ea11c0a98-config-volume\") pod \"dns-default-wskr8\" (UID: \"97837bed-3f7c-4bf8-be43-550ea11c0a98\") " pod="openshift-dns/dns-default-wskr8" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.073137 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b3ccf7f5-1756-4f98-8b76-fe7f9ae77075-stats-auth\") pod \"router-default-5444994796-l4pwr\" (UID: \"b3ccf7f5-1756-4f98-8b76-fe7f9ae77075\") " pod="openshift-ingress/router-default-5444994796-l4pwr" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.073176 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nztt6\" (UniqueName: \"kubernetes.io/projected/0532ff61-84a7-44b0-b8d3-d6ffad413de5-kube-api-access-nztt6\") pod \"collect-profiles-29330670-5868z\" (UID: \"0532ff61-84a7-44b0-b8d3-d6ffad413de5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330670-5868z" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.073197 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbf3be08-1d9c-4f7a-af37-48a8f2aa1159-config\") pod \"kube-controller-manager-operator-78b949d7b-x57mw\" (UID: \"bbf3be08-1d9c-4f7a-af37-48a8f2aa1159\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x57mw" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.073223 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0532ff61-84a7-44b0-b8d3-d6ffad413de5-config-volume\") pod \"collect-profiles-29330670-5868z\" (UID: \"0532ff61-84a7-44b0-b8d3-d6ffad413de5\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29330670-5868z" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.073243 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c7b3c652-baa1-4549-9b0b-974f430b56dd-mountpoint-dir\") pod \"csi-hostpathplugin-9qlzn\" (UID: \"c7b3c652-baa1-4549-9b0b-974f430b56dd\") " pod="hostpath-provisioner/csi-hostpathplugin-9qlzn" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.073260 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rg5d\" (UniqueName: \"kubernetes.io/projected/a9bd5896-2df2-4367-be33-9891f0bc67aa-kube-api-access-5rg5d\") pod \"catalog-operator-68c6474976-jzfnc\" (UID: \"a9bd5896-2df2-4367-be33-9891f0bc67aa\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzfnc" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.073748 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1f471f75-4e1e-4093-975b-e02e2b7f8b32-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.074193 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5da4f2d-db84-4a91-8f7e-7843f4062df6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dvg5m\" (UID: \"c5da4f2d-db84-4a91-8f7e-7843f4062df6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dvg5m" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.074240 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/e31ac6bf-7d41-4b00-a89d-eb64ebc0e9f1-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-fxzgs\" (UID: \"e31ac6bf-7d41-4b00-a89d-eb64ebc0e9f1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fxzgs" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.074433 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7b42db20-9e0c-4afa-b850-4f0e485b17e8-signing-cabundle\") pod \"service-ca-9c57cc56f-zh82b\" (UID: \"7b42db20-9e0c-4afa-b850-4f0e485b17e8\") " pod="openshift-service-ca/service-ca-9c57cc56f-zh82b" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.075233 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3ccf7f5-1756-4f98-8b76-fe7f9ae77075-service-ca-bundle\") pod \"router-default-5444994796-l4pwr\" (UID: \"b3ccf7f5-1756-4f98-8b76-fe7f9ae77075\") " pod="openshift-ingress/router-default-5444994796-l4pwr" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.075503 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d52d4285-2b77-4a88-8f02-add6e0de37ff-auth-proxy-config\") pod \"machine-config-operator-74547568cd-p9sts\" (UID: \"d52d4285-2b77-4a88-8f02-add6e0de37ff\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p9sts" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.075608 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/447bfecb-a799-47cc-ad14-2a10bc594d95-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fqbxr\" (UID: \"447bfecb-a799-47cc-ad14-2a10bc594d95\") " pod="openshift-marketplace/marketplace-operator-79b997595-fqbxr" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.075796 5024 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1f471f75-4e1e-4093-975b-e02e2b7f8b32-registry-certificates\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.075874 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f471f75-4e1e-4093-975b-e02e2b7f8b32-trusted-ca\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.076568 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d52d4285-2b77-4a88-8f02-add6e0de37ff-proxy-tls\") pod \"machine-config-operator-74547568cd-p9sts\" (UID: \"d52d4285-2b77-4a88-8f02-add6e0de37ff\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p9sts" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.076854 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbf3be08-1d9c-4f7a-af37-48a8f2aa1159-config\") pod \"kube-controller-manager-operator-78b949d7b-x57mw\" (UID: \"bbf3be08-1d9c-4f7a-af37-48a8f2aa1159\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x57mw" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.077029 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b3ccf7f5-1756-4f98-8b76-fe7f9ae77075-default-certificate\") pod \"router-default-5444994796-l4pwr\" (UID: \"b3ccf7f5-1756-4f98-8b76-fe7f9ae77075\") " pod="openshift-ingress/router-default-5444994796-l4pwr" 
Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.077242 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dacf4124-e72b-4d0f-8e4b-f43b083275b8-config\") pod \"service-ca-operator-777779d784-nhfn7\" (UID: \"dacf4124-e72b-4d0f-8e4b-f43b083275b8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nhfn7" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.077572 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a9bd5896-2df2-4367-be33-9891f0bc67aa-profile-collector-cert\") pod \"catalog-operator-68c6474976-jzfnc\" (UID: \"a9bd5896-2df2-4367-be33-9891f0bc67aa\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzfnc" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.077700 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0532ff61-84a7-44b0-b8d3-d6ffad413de5-config-volume\") pod \"collect-profiles-29330670-5868z\" (UID: \"0532ff61-84a7-44b0-b8d3-d6ffad413de5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330670-5868z" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.078081 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b3ccf7f5-1756-4f98-8b76-fe7f9ae77075-stats-auth\") pod \"router-default-5444994796-l4pwr\" (UID: \"b3ccf7f5-1756-4f98-8b76-fe7f9ae77075\") " pod="openshift-ingress/router-default-5444994796-l4pwr" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.078128 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d52d4285-2b77-4a88-8f02-add6e0de37ff-images\") pod \"machine-config-operator-74547568cd-p9sts\" (UID: \"d52d4285-2b77-4a88-8f02-add6e0de37ff\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p9sts" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.078483 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3ccf7f5-1756-4f98-8b76-fe7f9ae77075-metrics-certs\") pod \"router-default-5444994796-l4pwr\" (UID: \"b3ccf7f5-1756-4f98-8b76-fe7f9ae77075\") " pod="openshift-ingress/router-default-5444994796-l4pwr" Oct 07 12:30:04 crc kubenswrapper[5024]: E1007 12:30:04.078519 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:04.578500374 +0000 UTC m=+142.654287212 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.078711 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5da4f2d-db84-4a91-8f7e-7843f4062df6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dvg5m\" (UID: \"c5da4f2d-db84-4a91-8f7e-7843f4062df6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dvg5m" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.079475 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dacf4124-e72b-4d0f-8e4b-f43b083275b8-serving-cert\") pod 
\"service-ca-operator-777779d784-nhfn7\" (UID: \"dacf4124-e72b-4d0f-8e4b-f43b083275b8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nhfn7" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.079717 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7b42db20-9e0c-4afa-b850-4f0e485b17e8-signing-key\") pod \"service-ca-9c57cc56f-zh82b\" (UID: \"7b42db20-9e0c-4afa-b850-4f0e485b17e8\") " pod="openshift-service-ca/service-ca-9c57cc56f-zh82b" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.080273 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/447bfecb-a799-47cc-ad14-2a10bc594d95-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fqbxr\" (UID: \"447bfecb-a799-47cc-ad14-2a10bc594d95\") " pod="openshift-marketplace/marketplace-operator-79b997595-fqbxr" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.080680 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbf3be08-1d9c-4f7a-af37-48a8f2aa1159-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-x57mw\" (UID: \"bbf3be08-1d9c-4f7a-af37-48a8f2aa1159\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x57mw" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.080710 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e31ac6bf-7d41-4b00-a89d-eb64ebc0e9f1-proxy-tls\") pod \"machine-config-controller-84d6567774-fxzgs\" (UID: \"e31ac6bf-7d41-4b00-a89d-eb64ebc0e9f1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fxzgs" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.081076 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" 
(UniqueName: \"kubernetes.io/secret/3096e7a7-c9b4-47ab-8336-b87e49e4521b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-whw6l\" (UID: \"3096e7a7-c9b4-47ab-8336-b87e49e4521b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-whw6l" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.081364 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1f471f75-4e1e-4093-975b-e02e2b7f8b32-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.081651 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/06b21306-4b72-47d2-b814-9b4b333295bb-metrics-tls\") pod \"dns-operator-744455d44c-frvxs\" (UID: \"06b21306-4b72-47d2-b814-9b4b333295bb\") " pod="openshift-dns-operator/dns-operator-744455d44c-frvxs" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.082036 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0532ff61-84a7-44b0-b8d3-d6ffad413de5-secret-volume\") pod \"collect-profiles-29330670-5868z\" (UID: \"0532ff61-84a7-44b0-b8d3-d6ffad413de5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330670-5868z" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.082197 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a9bd5896-2df2-4367-be33-9891f0bc67aa-srv-cert\") pod \"catalog-operator-68c6474976-jzfnc\" (UID: \"a9bd5896-2df2-4367-be33-9891f0bc67aa\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzfnc" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.082235 5024 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1f471f75-4e1e-4093-975b-e02e2b7f8b32-registry-tls\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.127484 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hbpw\" (UniqueName: \"kubernetes.io/projected/1f471f75-4e1e-4093-975b-e02e2b7f8b32-kube-api-access-8hbpw\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.165058 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dfdf\" (UniqueName: \"kubernetes.io/projected/3096e7a7-c9b4-47ab-8336-b87e49e4521b-kube-api-access-5dfdf\") pod \"multus-admission-controller-857f4d67dd-whw6l\" (UID: \"3096e7a7-c9b4-47ab-8336-b87e49e4521b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-whw6l" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.174728 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.174913 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c7b3c652-baa1-4549-9b0b-974f430b56dd-mountpoint-dir\") pod \"csi-hostpathplugin-9qlzn\" (UID: \"c7b3c652-baa1-4549-9b0b-974f430b56dd\") " pod="hostpath-provisioner/csi-hostpathplugin-9qlzn" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.174953 5024 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czg2n\" (UniqueName: \"kubernetes.io/projected/c7b3c652-baa1-4549-9b0b-974f430b56dd-kube-api-access-czg2n\") pod \"csi-hostpathplugin-9qlzn\" (UID: \"c7b3c652-baa1-4549-9b0b-974f430b56dd\") " pod="hostpath-provisioner/csi-hostpathplugin-9qlzn" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.175001 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c7b3c652-baa1-4549-9b0b-974f430b56dd-registration-dir\") pod \"csi-hostpathplugin-9qlzn\" (UID: \"c7b3c652-baa1-4549-9b0b-974f430b56dd\") " pod="hostpath-provisioner/csi-hostpathplugin-9qlzn" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.175045 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c495c046-5e10-4020-a686-2834edbe289e-cert\") pod \"ingress-canary-cf5dj\" (UID: \"c495c046-5e10-4020-a686-2834edbe289e\") " pod="openshift-ingress-canary/ingress-canary-cf5dj" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.175067 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c7b3c652-baa1-4549-9b0b-974f430b56dd-plugins-dir\") pod \"csi-hostpathplugin-9qlzn\" (UID: \"c7b3c652-baa1-4549-9b0b-974f430b56dd\") " pod="hostpath-provisioner/csi-hostpathplugin-9qlzn" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.175198 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/97837bed-3f7c-4bf8-be43-550ea11c0a98-metrics-tls\") pod \"dns-default-wskr8\" (UID: \"97837bed-3f7c-4bf8-be43-550ea11c0a98\") " pod="openshift-dns/dns-default-wskr8" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.175253 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mwbdd\" (UniqueName: \"kubernetes.io/projected/97837bed-3f7c-4bf8-be43-550ea11c0a98-kube-api-access-mwbdd\") pod \"dns-default-wskr8\" (UID: \"97837bed-3f7c-4bf8-be43-550ea11c0a98\") " pod="openshift-dns/dns-default-wskr8" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.175282 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jcx7\" (UniqueName: \"kubernetes.io/projected/c495c046-5e10-4020-a686-2834edbe289e-kube-api-access-7jcx7\") pod \"ingress-canary-cf5dj\" (UID: \"c495c046-5e10-4020-a686-2834edbe289e\") " pod="openshift-ingress-canary/ingress-canary-cf5dj" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.175312 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c7b3c652-baa1-4549-9b0b-974f430b56dd-socket-dir\") pod \"csi-hostpathplugin-9qlzn\" (UID: \"c7b3c652-baa1-4549-9b0b-974f430b56dd\") " pod="hostpath-provisioner/csi-hostpathplugin-9qlzn" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.175333 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c7b3c652-baa1-4549-9b0b-974f430b56dd-csi-data-dir\") pod \"csi-hostpathplugin-9qlzn\" (UID: \"c7b3c652-baa1-4549-9b0b-974f430b56dd\") " pod="hostpath-provisioner/csi-hostpathplugin-9qlzn" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.175356 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97837bed-3f7c-4bf8-be43-550ea11c0a98-config-volume\") pod \"dns-default-wskr8\" (UID: \"97837bed-3f7c-4bf8-be43-550ea11c0a98\") " pod="openshift-dns/dns-default-wskr8" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.175797 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/c7b3c652-baa1-4549-9b0b-974f430b56dd-plugins-dir\") pod \"csi-hostpathplugin-9qlzn\" (UID: \"c7b3c652-baa1-4549-9b0b-974f430b56dd\") " pod="hostpath-provisioner/csi-hostpathplugin-9qlzn" Oct 07 12:30:04 crc kubenswrapper[5024]: E1007 12:30:04.175990 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:04.675964525 +0000 UTC m=+142.751751433 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.176110 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c7b3c652-baa1-4549-9b0b-974f430b56dd-mountpoint-dir\") pod \"csi-hostpathplugin-9qlzn\" (UID: \"c7b3c652-baa1-4549-9b0b-974f430b56dd\") " pod="hostpath-provisioner/csi-hostpathplugin-9qlzn" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.176170 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97837bed-3f7c-4bf8-be43-550ea11c0a98-config-volume\") pod \"dns-default-wskr8\" (UID: \"97837bed-3f7c-4bf8-be43-550ea11c0a98\") " pod="openshift-dns/dns-default-wskr8" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.176244 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/c7b3c652-baa1-4549-9b0b-974f430b56dd-csi-data-dir\") pod \"csi-hostpathplugin-9qlzn\" (UID: \"c7b3c652-baa1-4549-9b0b-974f430b56dd\") " pod="hostpath-provisioner/csi-hostpathplugin-9qlzn" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.176374 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c7b3c652-baa1-4549-9b0b-974f430b56dd-registration-dir\") pod \"csi-hostpathplugin-9qlzn\" (UID: \"c7b3c652-baa1-4549-9b0b-974f430b56dd\") " pod="hostpath-provisioner/csi-hostpathplugin-9qlzn" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.176314 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c7b3c652-baa1-4549-9b0b-974f430b56dd-socket-dir\") pod \"csi-hostpathplugin-9qlzn\" (UID: \"c7b3c652-baa1-4549-9b0b-974f430b56dd\") " pod="hostpath-provisioner/csi-hostpathplugin-9qlzn" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.178925 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pbm5\" (UniqueName: \"kubernetes.io/projected/7b42db20-9e0c-4afa-b850-4f0e485b17e8-kube-api-access-6pbm5\") pod \"service-ca-9c57cc56f-zh82b\" (UID: \"7b42db20-9e0c-4afa-b850-4f0e485b17e8\") " pod="openshift-service-ca/service-ca-9c57cc56f-zh82b" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.179540 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c495c046-5e10-4020-a686-2834edbe289e-cert\") pod \"ingress-canary-cf5dj\" (UID: \"c495c046-5e10-4020-a686-2834edbe289e\") " pod="openshift-ingress-canary/ingress-canary-cf5dj" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.182662 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/97837bed-3f7c-4bf8-be43-550ea11c0a98-metrics-tls\") pod 
\"dns-default-wskr8\" (UID: \"97837bed-3f7c-4bf8-be43-550ea11c0a98\") " pod="openshift-dns/dns-default-wskr8" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.195786 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk6qw\" (UniqueName: \"kubernetes.io/projected/c5da4f2d-db84-4a91-8f7e-7843f4062df6-kube-api-access-vk6qw\") pod \"kube-storage-version-migrator-operator-b67b599dd-dvg5m\" (UID: \"c5da4f2d-db84-4a91-8f7e-7843f4062df6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dvg5m" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.200392 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-whw6l" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.207696 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr8cd\" (UniqueName: \"kubernetes.io/projected/dacf4124-e72b-4d0f-8e4b-f43b083275b8-kube-api-access-nr8cd\") pod \"service-ca-operator-777779d784-nhfn7\" (UID: \"dacf4124-e72b-4d0f-8e4b-f43b083275b8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nhfn7" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.223949 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqx42\" (UniqueName: \"kubernetes.io/projected/e31ac6bf-7d41-4b00-a89d-eb64ebc0e9f1-kube-api-access-tqx42\") pod \"machine-config-controller-84d6567774-fxzgs\" (UID: \"e31ac6bf-7d41-4b00-a89d-eb64ebc0e9f1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fxzgs" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.244467 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgkhh\" (UniqueName: \"kubernetes.io/projected/d52d4285-2b77-4a88-8f02-add6e0de37ff-kube-api-access-lgkhh\") pod \"machine-config-operator-74547568cd-p9sts\" (UID: 
\"d52d4285-2b77-4a88-8f02-add6e0de37ff\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p9sts" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.264031 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnw9n\" (UniqueName: \"kubernetes.io/projected/06b21306-4b72-47d2-b814-9b4b333295bb-kube-api-access-nnw9n\") pod \"dns-operator-744455d44c-frvxs\" (UID: \"06b21306-4b72-47d2-b814-9b4b333295bb\") " pod="openshift-dns-operator/dns-operator-744455d44c-frvxs" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.276795 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:04 crc kubenswrapper[5024]: E1007 12:30:04.277388 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:04.777365295 +0000 UTC m=+142.853152153 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.283296 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rg5d\" (UniqueName: \"kubernetes.io/projected/a9bd5896-2df2-4367-be33-9891f0bc67aa-kube-api-access-5rg5d\") pod \"catalog-operator-68c6474976-jzfnc\" (UID: \"a9bd5896-2df2-4367-be33-9891f0bc67aa\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzfnc" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.303951 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nztt6\" (UniqueName: \"kubernetes.io/projected/0532ff61-84a7-44b0-b8d3-d6ffad413de5-kube-api-access-nztt6\") pod \"collect-profiles-29330670-5868z\" (UID: \"0532ff61-84a7-44b0-b8d3-d6ffad413de5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330670-5868z" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.322540 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbf3be08-1d9c-4f7a-af37-48a8f2aa1159-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-x57mw\" (UID: \"bbf3be08-1d9c-4f7a-af37-48a8f2aa1159\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x57mw" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.343807 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/1f471f75-4e1e-4093-975b-e02e2b7f8b32-bound-sa-token\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.372765 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjrbw\" (UniqueName: \"kubernetes.io/projected/b3ccf7f5-1756-4f98-8b76-fe7f9ae77075-kube-api-access-rjrbw\") pod \"router-default-5444994796-l4pwr\" (UID: \"b3ccf7f5-1756-4f98-8b76-fe7f9ae77075\") " pod="openshift-ingress/router-default-5444994796-l4pwr" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.377594 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:04 crc kubenswrapper[5024]: E1007 12:30:04.377815 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:04.877799799 +0000 UTC m=+142.953586637 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.384517 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xg62\" (UniqueName: \"kubernetes.io/projected/1b35c5bc-b3c2-4109-928a-3d1898fdca29-kube-api-access-7xg62\") pod \"migrator-59844c95c7-2p5j7\" (UID: \"1b35c5bc-b3c2-4109-928a-3d1898fdca29\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2p5j7" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.390242 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" event={"ID":"06411252-fabf-416c-8b3f-3cb830b235f4","Type":"ContainerStarted","Data":"659cb54e304323ad95953132a0c6a0e35cefb0c6cbd3b6cae117fcf199bb50dc"} Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.391159 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5" event={"ID":"a74a950f-a98b-45c9-bdd0-0cdda261396f","Type":"ContainerStarted","Data":"969dc18ff4abc57d7d3525a167ffd61e8e24fb42be93fedab8952660fa937db1"} Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.401111 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-l4pwr" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.406388 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p9sts" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.413761 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29330670-5868z" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.423262 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drc7k\" (UniqueName: \"kubernetes.io/projected/447bfecb-a799-47cc-ad14-2a10bc594d95-kube-api-access-drc7k\") pod \"marketplace-operator-79b997595-fqbxr\" (UID: \"447bfecb-a799-47cc-ad14-2a10bc594d95\") " pod="openshift-marketplace/marketplace-operator-79b997595-fqbxr" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.431821 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f1234432-773a-4c0b-99db-d37df59ec9b6-certs\") pod \"machine-config-server-m97mg\" (UID: \"f1234432-773a-4c0b-99db-d37df59ec9b6\") " pod="openshift-machine-config-operator/machine-config-server-m97mg" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.434494 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-zh82b" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.449092 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nhfn7" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.458293 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2p5j7" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.462995 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dvg5m" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.469634 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-frvxs" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.478362 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fqbxr" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.479360 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:04 crc kubenswrapper[5024]: E1007 12:30:04.479760 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:04.979739993 +0000 UTC m=+143.055526941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.485178 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x57mw" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.486470 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwbdd\" (UniqueName: \"kubernetes.io/projected/97837bed-3f7c-4bf8-be43-550ea11c0a98-kube-api-access-mwbdd\") pod \"dns-default-wskr8\" (UID: \"97837bed-3f7c-4bf8-be43-550ea11c0a98\") " pod="openshift-dns/dns-default-wskr8" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.492631 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fxzgs" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.505278 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czg2n\" (UniqueName: \"kubernetes.io/projected/c7b3c652-baa1-4549-9b0b-974f430b56dd-kube-api-access-czg2n\") pod \"csi-hostpathplugin-9qlzn\" (UID: \"c7b3c652-baa1-4549-9b0b-974f430b56dd\") " pod="hostpath-provisioner/csi-hostpathplugin-9qlzn" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.509007 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzfnc" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.523373 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wskr8" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.546870 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9qlzn" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.558071 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f1234432-773a-4c0b-99db-d37df59ec9b6-node-bootstrap-token\") pod \"machine-config-server-m97mg\" (UID: \"f1234432-773a-4c0b-99db-d37df59ec9b6\") " pod="openshift-machine-config-operator/machine-config-server-m97mg" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.558659 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf6jc\" (UniqueName: \"kubernetes.io/projected/f1234432-773a-4c0b-99db-d37df59ec9b6-kube-api-access-rf6jc\") pod \"machine-config-server-m97mg\" (UID: \"f1234432-773a-4c0b-99db-d37df59ec9b6\") " pod="openshift-machine-config-operator/machine-config-server-m97mg" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.571847 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jcx7\" (UniqueName: \"kubernetes.io/projected/c495c046-5e10-4020-a686-2834edbe289e-kube-api-access-7jcx7\") pod \"ingress-canary-cf5dj\" (UID: \"c495c046-5e10-4020-a686-2834edbe289e\") " pod="openshift-ingress-canary/ingress-canary-cf5dj" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.580914 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:04 crc kubenswrapper[5024]: E1007 12:30:04.581094 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-07 12:30:05.081073962 +0000 UTC m=+143.156860810 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.581305 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:04 crc kubenswrapper[5024]: E1007 12:30:04.581619 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:05.081608186 +0000 UTC m=+143.157395024 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.682334 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:04 crc kubenswrapper[5024]: E1007 12:30:04.682471 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:05.182445601 +0000 UTC m=+143.258232439 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.682796 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:04 crc kubenswrapper[5024]: E1007 12:30:04.683114 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:05.183107069 +0000 UTC m=+143.258893907 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.732867 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vt8qv"] Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.784072 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:04 crc kubenswrapper[5024]: E1007 12:30:04.784210 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:05.284183081 +0000 UTC m=+143.359969929 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.784347 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:04 crc kubenswrapper[5024]: E1007 12:30:04.784616 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:05.284605722 +0000 UTC m=+143.360392560 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.820668 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-m97mg" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.855429 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-cf5dj" Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.885323 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:04 crc kubenswrapper[5024]: E1007 12:30:04.885488 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:05.385468407 +0000 UTC m=+143.461255245 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.885559 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:04 crc kubenswrapper[5024]: E1007 12:30:04.885853 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:05.385845148 +0000 UTC m=+143.461631986 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.986491 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:04 crc kubenswrapper[5024]: E1007 12:30:04.986659 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:05.4866286 +0000 UTC m=+143.562415438 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:04 crc kubenswrapper[5024]: I1007 12:30:04.986876 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:04 crc kubenswrapper[5024]: E1007 12:30:04.987207 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:05.487193366 +0000 UTC m=+143.562980204 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.087817 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:05 crc kubenswrapper[5024]: E1007 12:30:05.088120 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:05.588096372 +0000 UTC m=+143.663883210 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.088377 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:05 crc kubenswrapper[5024]: E1007 12:30:05.088860 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:05.588849383 +0000 UTC m=+143.664636221 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:05 crc kubenswrapper[5024]: W1007 12:30:05.102881 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17854bb6_7bec_4972_92a8_299702642b45.slice/crio-e7f3a9bc78ca99dbce6dc0d34993362b18a92a2108a04ff8bbcaff6e11c96000 WatchSource:0}: Error finding container e7f3a9bc78ca99dbce6dc0d34993362b18a92a2108a04ff8bbcaff6e11c96000: Status 404 returned error can't find the container with id e7f3a9bc78ca99dbce6dc0d34993362b18a92a2108a04ff8bbcaff6e11c96000 Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.189620 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:05 crc kubenswrapper[5024]: E1007 12:30:05.190290 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:05.690275954 +0000 UTC m=+143.766062792 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.274669 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wp2gk"] Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.291446 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:05 crc kubenswrapper[5024]: E1007 12:30:05.291791 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:05.791772386 +0000 UTC m=+143.867559224 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.392158 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:05 crc kubenswrapper[5024]: E1007 12:30:05.392220 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:05.89220461 +0000 UTC m=+143.967991448 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.392607 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:05 crc kubenswrapper[5024]: E1007 12:30:05.392964 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:05.892956531 +0000 UTC m=+143.968743369 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.395293 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" event={"ID":"fd6f037f-c253-488b-9386-19aa7fab7fec","Type":"ContainerStarted","Data":"95792c4dae580c6881f756332d02590eef25bba75f7c60a0e4ba506996148fba"} Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.397727 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9t72d" event={"ID":"f631c93e-2066-410d-bfcb-232ee1cced2a","Type":"ContainerStarted","Data":"1a6152534f14a018a6fbb101e18719e2c0c1d18db504037d44317d595d257a0f"} Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.398695 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-wh6d2" event={"ID":"563d2566-d201-4094-9f4d-20a167bfd0f7","Type":"ContainerStarted","Data":"e3514c55b80f113d2c961b840865bba33d7051fbefa32eddb5e7879b53dca115"} Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.399515 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s266g" event={"ID":"f2327164-7135-4dd2-bc42-dd68cefdb772","Type":"ContainerStarted","Data":"d1b71e749594d802d8596005e7eea38410de0f8d1febbf760fbec6bb4c3c507a"} Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.400381 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bdgps" 
event={"ID":"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29","Type":"ContainerStarted","Data":"91ef45b73b847577366d208e1566fe4ae1fb35690b8fa401229d2669e192df3b"} Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.401566 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vt8qv" event={"ID":"17854bb6-7bec-4972-92a8-299702642b45","Type":"ContainerStarted","Data":"e7f3a9bc78ca99dbce6dc0d34993362b18a92a2108a04ff8bbcaff6e11c96000"} Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.493822 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:05 crc kubenswrapper[5024]: E1007 12:30:05.494058 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:05.994026301 +0000 UTC m=+144.069813169 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.494181 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z"
Oct 07 12:30:05 crc kubenswrapper[5024]: E1007 12:30:05.494480 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:05.994463674 +0000 UTC m=+144.070250512 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.523992 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr9gr"]
Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.598751 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 07 12:30:05 crc kubenswrapper[5024]: E1007 12:30:05.599053 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:06.099036881 +0000 UTC m=+144.174823709 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.627691 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jrgc5"]
Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.634027 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9rwzm"]
Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.643283 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pxvcj"]
Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.648196 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrdxk"]
Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.694222 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qr4f5"]
Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.696537 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fdw8z"]
Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.701321 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z"
Oct 07 12:30:05 crc kubenswrapper[5024]: E1007 12:30:05.701685 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:06.201673596 +0000 UTC m=+144.277460434 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.724349 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zh82b"]
Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.755790 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x57mw"]
Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.806918 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 07 12:30:05 crc kubenswrapper[5024]: E1007 12:30:05.807099 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:06.307055486 +0000 UTC m=+144.382842324 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.807171 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z"
Oct 07 12:30:05 crc kubenswrapper[5024]: E1007 12:30:05.807807 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:06.307797746 +0000 UTC m=+144.383584584 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.860568 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6"
Oct 07 12:30:05 crc kubenswrapper[5024]: W1007 12:30:05.873946 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3ccf7f5_1756_4f98_8b76_fe7f9ae77075.slice/crio-ecf9445d81af87a99880b410c2efffea435609b0b1a23fb6534b2ee5dc524740 WatchSource:0}: Error finding container ecf9445d81af87a99880b410c2efffea435609b0b1a23fb6534b2ee5dc524740: Status 404 returned error can't find the container with id ecf9445d81af87a99880b410c2efffea435609b0b1a23fb6534b2ee5dc524740
Oct 07 12:30:05 crc kubenswrapper[5024]: W1007 12:30:05.875900 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9bbb41d_c515_43d3_8e35_a73bed39e840.slice/crio-d179bb5b5af486a23a328ac96db9cc4bf71d1636ad9670fe13a71af618c507bc WatchSource:0}: Error finding container d179bb5b5af486a23a328ac96db9cc4bf71d1636ad9670fe13a71af618c507bc: Status 404 returned error can't find the container with id d179bb5b5af486a23a328ac96db9cc4bf71d1636ad9670fe13a71af618c507bc
Oct 07 12:30:05 crc kubenswrapper[5024]: W1007 12:30:05.879265 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7300a892_62e1_4ec9_b0c0_83d0aaf90bd3.slice/crio-d4613d8bc2b87007fed505f87d503a326baa0e81356a556989e423d2ef3f6666 WatchSource:0}: Error finding container d4613d8bc2b87007fed505f87d503a326baa0e81356a556989e423d2ef3f6666: Status 404 returned error can't find the container with id d4613d8bc2b87007fed505f87d503a326baa0e81356a556989e423d2ef3f6666
Oct 07 12:30:05 crc kubenswrapper[5024]: W1007 12:30:05.889384 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfecdd47_3bdf_4f99_b34b_dbe793b59717.slice/crio-f1f07feb0191f089b5b772129885d2e192db1e3275d8cdf24e41576694da76fd WatchSource:0}: Error finding container f1f07feb0191f089b5b772129885d2e192db1e3275d8cdf24e41576694da76fd: Status 404 returned error can't find the container with id f1f07feb0191f089b5b772129885d2e192db1e3275d8cdf24e41576694da76fd
Oct 07 12:30:05 crc kubenswrapper[5024]: I1007 12:30:05.909233 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 07 12:30:05 crc kubenswrapper[5024]: E1007 12:30:05.909516 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:06.409496684 +0000 UTC m=+144.485283522 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:05 crc kubenswrapper[5024]: W1007 12:30:05.983591 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b42db20_9e0c_4afa_b850_4f0e485b17e8.slice/crio-5272f613e542844a6ca6486fe9cfa9400353ac73ebe5330fb6193d62fe87bac6 WatchSource:0}: Error finding container 5272f613e542844a6ca6486fe9cfa9400353ac73ebe5330fb6193d62fe87bac6: Status 404 returned error can't find the container with id 5272f613e542844a6ca6486fe9cfa9400353ac73ebe5330fb6193d62fe87bac6
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.011159 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z"
Oct 07 12:30:06 crc kubenswrapper[5024]: E1007 12:30:06.011615 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:06.511601114 +0000 UTC m=+144.587387952 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.083290 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7fw2t"]
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.087887 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scskj"]
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.091554 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-whw6l"]
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.097407 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xtwlc"]
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.113795 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 07 12:30:06 crc kubenswrapper[5024]: E1007 12:30:06.114221 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:06.614202537 +0000 UTC m=+144.689989375 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.215495 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z"
Oct 07 12:30:06 crc kubenswrapper[5024]: E1007 12:30:06.215908 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:06.715894825 +0000 UTC m=+144.791681663 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.272288 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nhfn7"]
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.274576 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-p9sts"]
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.276483 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hflv2"]
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.278326 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2p5j7"]
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.306965 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wskr8"]
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.316022 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.316354 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nv9ml"]
Oct 07 12:30:06 crc kubenswrapper[5024]: E1007 12:30:06.316593 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:06.816572775 +0000 UTC m=+144.892359613 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.317019 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z"
Oct 07 12:30:06 crc kubenswrapper[5024]: E1007 12:30:06.317413 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:06.817405028 +0000 UTC m=+144.893191866 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:06 crc kubenswrapper[5024]: W1007 12:30:06.319983 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6f06beb_72aa_499a_a760_d36404bca577.slice/crio-fd4f470f2e3ab4c5794305560491948fa2c3b788af5286087300bb5f94850f12 WatchSource:0}: Error finding container fd4f470f2e3ab4c5794305560491948fa2c3b788af5286087300bb5f94850f12: Status 404 returned error can't find the container with id fd4f470f2e3ab4c5794305560491948fa2c3b788af5286087300bb5f94850f12
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.320207 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-25rxs"]
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.329093 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9qlzn"]
Oct 07 12:30:06 crc kubenswrapper[5024]: W1007 12:30:06.333606 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3096e7a7_c9b4_47ab_8336_b87e49e4521b.slice/crio-bde9841b81a03207e2193aac767eb69bb29d8ebc0776319dc25cceb67a3065bd WatchSource:0}: Error finding container bde9841b81a03207e2193aac767eb69bb29d8ebc0776319dc25cceb67a3065bd: Status 404 returned error can't find the container with id bde9841b81a03207e2193aac767eb69bb29d8ebc0776319dc25cceb67a3065bd
Oct 07 12:30:06 crc kubenswrapper[5024]: W1007 12:30:06.335569 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0abe2fa8_3512_46b2_a738_682a833ae488.slice/crio-d5c9e146933ae38521912149e9c2a6835591e2a48d35d22c7741387545332dcc WatchSource:0}: Error finding container d5c9e146933ae38521912149e9c2a6835591e2a48d35d22c7741387545332dcc: Status 404 returned error can't find the container with id d5c9e146933ae38521912149e9c2a6835591e2a48d35d22c7741387545332dcc
Oct 07 12:30:06 crc kubenswrapper[5024]: W1007 12:30:06.364088 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddacf4124_e72b_4d0f_8e4b_f43b083275b8.slice/crio-f6a7fee4bfd0683980ebf8133e61c4066d703adc3688016c77440bb1a36e5534 WatchSource:0}: Error finding container f6a7fee4bfd0683980ebf8133e61c4066d703adc3688016c77440bb1a36e5534: Status 404 returned error can't find the container with id f6a7fee4bfd0683980ebf8133e61c4066d703adc3688016c77440bb1a36e5534
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.371156 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29330670-5868z"]
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.378228 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6t8l5"]
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.412753 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5" event={"ID":"a74a950f-a98b-45c9-bdd0-0cdda261396f","Type":"ContainerStarted","Data":"62513c8cbba0d6e60d4e05984c07cd1e32e9b0ed0df234292f364be4c1076f72"}
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.418193 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 07 12:30:06 crc kubenswrapper[5024]: E1007 12:30:06.419074 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:06.919045105 +0000 UTC m=+144.994831933 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.419254 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z"
Oct 07 12:30:06 crc kubenswrapper[5024]: E1007 12:30:06.419703 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:06.919696553 +0000 UTC m=+144.995483381 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.428015 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jrgc5" event={"ID":"a9bbb41d-c515-43d3-8e35-a73bed39e840","Type":"ContainerStarted","Data":"d179bb5b5af486a23a328ac96db9cc4bf71d1636ad9670fe13a71af618c507bc"}
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.434007 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nhfn7" event={"ID":"dacf4124-e72b-4d0f-8e4b-f43b083275b8","Type":"ContainerStarted","Data":"f6a7fee4bfd0683980ebf8133e61c4066d703adc3688016c77440bb1a36e5534"}
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.436741 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7fw2t" event={"ID":"0abe2fa8-3512-46b2-a738-682a833ae488","Type":"ContainerStarted","Data":"d5c9e146933ae38521912149e9c2a6835591e2a48d35d22c7741387545332dcc"}
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.437694 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-zh82b" event={"ID":"7b42db20-9e0c-4afa-b850-4f0e485b17e8","Type":"ContainerStarted","Data":"5272f613e542844a6ca6486fe9cfa9400353ac73ebe5330fb6193d62fe87bac6"}
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.438611 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2p5j7" event={"ID":"1b35c5bc-b3c2-4109-928a-3d1898fdca29","Type":"ContainerStarted","Data":"68c21ab1b7b6a603bdb1d6cc9c0d0a58fd272fa4b190004ea17f18a660136d01"}
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.439506 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scskj" event={"ID":"d1d4add5-c462-4ccc-8c65-8efd72b99637","Type":"ContainerStarted","Data":"96891db92c6ce590f3f06782a6feca2b663bae9bdfc178bb958f8c64a751ced2"}
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.441291 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrdxk" event={"ID":"bc6b7685-5713-4da2-a9bf-bb61144e3561","Type":"ContainerStarted","Data":"927e8837ea88910f4c034a7362b906a03482c06d29ed31698b447c51a305071c"}
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.447927 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p9sts" event={"ID":"d52d4285-2b77-4a88-8f02-add6e0de37ff","Type":"ContainerStarted","Data":"50f4b20e40c9142385293404e749160d1066a5d352fcde06dd9b95987c9ea213"}
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.453375 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hflv2" event={"ID":"6bb2be95-3593-4045-8dca-353189946a2f","Type":"ContainerStarted","Data":"e4674cdee45de2d82e78411f4a20dbed59d61b0ef52049eb08eff87953acf842"}
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.475749 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wskr8" event={"ID":"97837bed-3f7c-4bf8-be43-550ea11c0a98","Type":"ContainerStarted","Data":"e11f59da59972d0ef8245b9f98347b885c2a088b551b81bac5133ca2ab11de11"}
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.479238 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9qlzn" event={"ID":"c7b3c652-baa1-4549-9b0b-974f430b56dd","Type":"ContainerStarted","Data":"f8bbc22b355419efd2a170c42f164351998d55c62c02984fe57cddcd197e6e6f"}
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.480404 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs" event={"ID":"f324f3c7-44fa-473c-8b60-ea30be3b7045","Type":"ContainerStarted","Data":"af5845f5793dbacd259bd20894aff6d9962eff4c415e592a03f77ee1a2f8a196"}
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.486772 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qr4f5" event={"ID":"7300a892-62e1-4ec9-b0c0-83d0aaf90bd3","Type":"ContainerStarted","Data":"d4613d8bc2b87007fed505f87d503a326baa0e81356a556989e423d2ef3f6666"}
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.487688 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9rwzm" event={"ID":"cfecdd47-3bdf-4f99-b34b-dbe793b59717","Type":"ContainerStarted","Data":"f1f07feb0191f089b5b772129885d2e192db1e3275d8cdf24e41576694da76fd"}
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.488375 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wp2gk" event={"ID":"6047f06e-4b55-4a39-be9c-6341c8cf7082","Type":"ContainerStarted","Data":"87209008f77f8cc104bd99a05dc95ddc95ef640b89956ed813a0b4b2c4884507"}
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.489642 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-l4pwr" event={"ID":"b3ccf7f5-1756-4f98-8b76-fe7f9ae77075","Type":"ContainerStarted","Data":"ecf9445d81af87a99880b410c2efffea435609b0b1a23fb6534b2ee5dc524740"}
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.491882 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr9gr" event={"ID":"306952da-e05e-468a-8e44-5cc64940f7f6","Type":"ContainerStarted","Data":"178c0f2a5876eaafc41b933e3be0b9baec633ebfd469ed7b5a46de66f4c37c6d"}
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.496700 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-whw6l" event={"ID":"3096e7a7-c9b4-47ab-8336-b87e49e4521b","Type":"ContainerStarted","Data":"bde9841b81a03207e2193aac767eb69bb29d8ebc0776319dc25cceb67a3065bd"}
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.498372 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nv9ml" event={"ID":"de462421-9ecd-4bd7-9b00-f054da067ca6","Type":"ContainerStarted","Data":"b41eb6d51fbc77a3153bb1a0929e852a0409109ae42d5eee184e2785ad66451c"}
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.503541 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hdr5j" event={"ID":"24a251d4-749c-4d2d-9fb5-8bd2330d7b35","Type":"ContainerStarted","Data":"c3ecc85a7972e27bbe3a35679396a6a81064d3415cb3cd8565be36e4ecbb5a0f"}
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.504491 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fdw8z" event={"ID":"3c3167bb-dbe2-42bf-8693-cf28b7a9a28c","Type":"ContainerStarted","Data":"bb0bbd3a85f91a1e85f952e991704adc3e21c3f06cecdc21ea6f0b7b2f936482"}
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.505255 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x57mw" event={"ID":"bbf3be08-1d9c-4f7a-af37-48a8f2aa1159","Type":"ContainerStarted","Data":"e277273c067c3da6e531018e92c641e04b9d540267249c04bef5936926ef5899"}
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.505991 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xtwlc" event={"ID":"c6f06beb-72aa-499a-a760-d36404bca577","Type":"ContainerStarted","Data":"fd4f470f2e3ab4c5794305560491948fa2c3b788af5286087300bb5f94850f12"}
Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.521027 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 07 12:30:06 crc kubenswrapper[5024]: E1007 12:30:06.521123 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:07.021087603 +0000 UTC m=+145.096874441 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.521355 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:06 crc kubenswrapper[5024]: E1007 12:30:06.521702 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:07.02169277 +0000 UTC m=+145.097479608 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.545272 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-fxzgs"] Oct 07 12:30:06 crc kubenswrapper[5024]: W1007 12:30:06.593020 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode31ac6bf_7d41_4b00_a89d_eb64ebc0e9f1.slice/crio-e67cdfa06a89a668e653a482d4f37fa832658875d5220accb418f152cc9134bf WatchSource:0}: Error finding container e67cdfa06a89a668e653a482d4f37fa832658875d5220accb418f152cc9134bf: Status 404 returned error can't find the container with id e67cdfa06a89a668e653a482d4f37fa832658875d5220accb418f152cc9134bf Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.623246 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:06 crc kubenswrapper[5024]: E1007 12:30:06.623712 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:07.123642405 +0000 UTC m=+145.199429243 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.654451 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-cf5dj"] Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.724901 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:06 crc kubenswrapper[5024]: E1007 12:30:06.725184 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:07.225173638 +0000 UTC m=+145.300960476 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:06 crc kubenswrapper[5024]: W1007 12:30:06.749494 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc495c046_5e10_4020_a686_2834edbe289e.slice/crio-3f91cf3f5a81e3602ba2521602e2a9e95dafd4e3a973548dc88266c2ffe74467 WatchSource:0}: Error finding container 3f91cf3f5a81e3602ba2521602e2a9e95dafd4e3a973548dc88266c2ffe74467: Status 404 returned error can't find the container with id 3f91cf3f5a81e3602ba2521602e2a9e95dafd4e3a973548dc88266c2ffe74467 Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.826268 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:06 crc kubenswrapper[5024]: E1007 12:30:06.826383 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:07.326359913 +0000 UTC m=+145.402146751 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.826440 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:06 crc kubenswrapper[5024]: E1007 12:30:06.826741 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:07.326733523 +0000 UTC m=+145.402520361 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.931158 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:06 crc kubenswrapper[5024]: E1007 12:30:06.931369 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:07.431326831 +0000 UTC m=+145.507113699 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.932199 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:06 crc kubenswrapper[5024]: E1007 12:30:06.932691 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:07.432669299 +0000 UTC m=+145.508456137 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.989723 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fqbxr"] Oct 07 12:30:06 crc kubenswrapper[5024]: I1007 12:30:06.995229 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dvg5m"] Oct 07 12:30:07 crc kubenswrapper[5024]: I1007 12:30:07.005950 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzfnc"] Oct 07 12:30:07 crc kubenswrapper[5024]: I1007 12:30:07.011650 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-frvxs"] Oct 07 12:30:07 crc kubenswrapper[5024]: I1007 12:30:07.034026 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:07 crc kubenswrapper[5024]: E1007 12:30:07.034512 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-07 12:30:07.53448841 +0000 UTC m=+145.610275238 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:07 crc kubenswrapper[5024]: W1007 12:30:07.127837 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod447bfecb_a799_47cc_ad14_2a10bc594d95.slice/crio-a6ee43ef314b80752e276a4df270ceae2b9d672acaaa928eb547767d681f40bd WatchSource:0}: Error finding container a6ee43ef314b80752e276a4df270ceae2b9d672acaaa928eb547767d681f40bd: Status 404 returned error can't find the container with id a6ee43ef314b80752e276a4df270ceae2b9d672acaaa928eb547767d681f40bd Oct 07 12:30:07 crc kubenswrapper[5024]: I1007 12:30:07.135330 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:07 crc kubenswrapper[5024]: E1007 12:30:07.135638 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:07.635628763 +0000 UTC m=+145.711415601 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:07 crc kubenswrapper[5024]: I1007 12:30:07.236601 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:07 crc kubenswrapper[5024]: E1007 12:30:07.237382 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:07.737349051 +0000 UTC m=+145.813135929 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:07 crc kubenswrapper[5024]: I1007 12:30:07.338970 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:07 crc kubenswrapper[5024]: E1007 12:30:07.339687 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:07.839658066 +0000 UTC m=+145.915444924 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:07 crc kubenswrapper[5024]: I1007 12:30:07.440777 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:07 crc kubenswrapper[5024]: E1007 12:30:07.441507 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:07.941485448 +0000 UTC m=+146.017272296 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:07 crc kubenswrapper[5024]: I1007 12:30:07.441672 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:07 crc kubenswrapper[5024]: E1007 12:30:07.442076 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:07.942064134 +0000 UTC m=+146.017850982 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:07 crc kubenswrapper[5024]: I1007 12:30:07.511708 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29330670-5868z" event={"ID":"0532ff61-84a7-44b0-b8d3-d6ffad413de5","Type":"ContainerStarted","Data":"675972bd8f68b13c027c9946fbbb3cca77bcd99c7421beab2de23179963667ff"} Oct 07 12:30:07 crc kubenswrapper[5024]: I1007 12:30:07.513995 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" event={"ID":"fd6f037f-c253-488b-9386-19aa7fab7fec","Type":"ContainerStarted","Data":"47c3b3c802dd71466a3e46866308f33783de24626d3417d3adb89619c2173fd6"} Oct 07 12:30:07 crc kubenswrapper[5024]: I1007 12:30:07.514992 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pxvcj" event={"ID":"bf226d94-0733-4205-9790-7590b441dac9","Type":"ContainerStarted","Data":"c77a803b88ef818699f1ef41223b74b6756885db9602774e855f9b6a6bd9dc49"} Oct 07 12:30:07 crc kubenswrapper[5024]: I1007 12:30:07.515700 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-cf5dj" event={"ID":"c495c046-5e10-4020-a686-2834edbe289e","Type":"ContainerStarted","Data":"3f91cf3f5a81e3602ba2521602e2a9e95dafd4e3a973548dc88266c2ffe74467"} Oct 07 12:30:07 crc kubenswrapper[5024]: I1007 12:30:07.516691 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fxzgs" event={"ID":"e31ac6bf-7d41-4b00-a89d-eb64ebc0e9f1","Type":"ContainerStarted","Data":"e67cdfa06a89a668e653a482d4f37fa832658875d5220accb418f152cc9134bf"} Oct 07 12:30:07 crc kubenswrapper[5024]: I1007 12:30:07.518704 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6t8l5" event={"ID":"1c479290-3870-4f83-b3e6-a86e91bda22e","Type":"ContainerStarted","Data":"1694fac9bd65e4e485e540a0991f5734f007d6ed801e53ea1a8609f2e9ebc680"} Oct 07 12:30:07 crc kubenswrapper[5024]: I1007 12:30:07.519925 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" event={"ID":"06411252-fabf-416c-8b3f-3cb830b235f4","Type":"ContainerStarted","Data":"7259de77f543e67131ade53a3a6e1fe2214ecc19fb790137152127d1fbb13807"} Oct 07 12:30:07 crc kubenswrapper[5024]: I1007 12:30:07.521099 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-wh6d2" event={"ID":"563d2566-d201-4094-9f4d-20a167bfd0f7","Type":"ContainerStarted","Data":"b145afeedf4f3838aff7d105c5701317fd1fafd89ec71f44ce7dd875bb4b839e"} Oct 07 12:30:07 crc kubenswrapper[5024]: I1007 12:30:07.522097 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-m97mg" event={"ID":"f1234432-773a-4c0b-99db-d37df59ec9b6","Type":"ContainerStarted","Data":"fc128773869933ee3b1d7212d393e46fb9d08dec3ec1e71a86004c36fb7ff6eb"} Oct 07 12:30:07 crc kubenswrapper[5024]: I1007 12:30:07.524113 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fqbxr" event={"ID":"447bfecb-a799-47cc-ad14-2a10bc594d95","Type":"ContainerStarted","Data":"a6ee43ef314b80752e276a4df270ceae2b9d672acaaa928eb547767d681f40bd"} Oct 07 12:30:07 crc kubenswrapper[5024]: W1007 12:30:07.525121 5024 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5da4f2d_db84_4a91_8f7e_7843f4062df6.slice/crio-9617de9b1980102fcd47241e87f2a0a1fbd73e0a89e5ea97f7f11d327eb60278 WatchSource:0}: Error finding container 9617de9b1980102fcd47241e87f2a0a1fbd73e0a89e5ea97f7f11d327eb60278: Status 404 returned error can't find the container with id 9617de9b1980102fcd47241e87f2a0a1fbd73e0a89e5ea97f7f11d327eb60278 Oct 07 12:30:07 crc kubenswrapper[5024]: I1007 12:30:07.543184 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:07 crc kubenswrapper[5024]: E1007 12:30:07.543588 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:08.043543426 +0000 UTC m=+146.119330304 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:07 crc kubenswrapper[5024]: I1007 12:30:07.645193 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:07 crc kubenswrapper[5024]: E1007 12:30:07.645664 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:08.145640156 +0000 UTC m=+146.221427074 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:07 crc kubenswrapper[5024]: I1007 12:30:07.746115 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:07 crc kubenswrapper[5024]: E1007 12:30:07.746308 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:08.246279545 +0000 UTC m=+146.322066383 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:07 crc kubenswrapper[5024]: I1007 12:30:07.746448 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:07 crc kubenswrapper[5024]: E1007 12:30:07.746741 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:08.246732797 +0000 UTC m=+146.322519625 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:07 crc kubenswrapper[5024]: I1007 12:30:07.847860 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:07 crc kubenswrapper[5024]: E1007 12:30:07.848030 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:08.348007614 +0000 UTC m=+146.423794452 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:07 crc kubenswrapper[5024]: I1007 12:30:07.848258 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:07 crc kubenswrapper[5024]: E1007 12:30:07.848545 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:08.348535868 +0000 UTC m=+146.424322796 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:07 crc kubenswrapper[5024]: I1007 12:30:07.949713 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:07 crc kubenswrapper[5024]: E1007 12:30:07.950213 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:08.450193036 +0000 UTC m=+146.525979874 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.051642 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:08 crc kubenswrapper[5024]: E1007 12:30:08.051976 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:08.551932135 +0000 UTC m=+146.627718973 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.153253 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:08 crc kubenswrapper[5024]: E1007 12:30:08.153406 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:08.653380116 +0000 UTC m=+146.729166954 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.153654 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:08 crc kubenswrapper[5024]: E1007 12:30:08.153937 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:08.653924301 +0000 UTC m=+146.729711139 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.254968 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:08 crc kubenswrapper[5024]: E1007 12:30:08.255199 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:08.755170837 +0000 UTC m=+146.830957665 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.255348 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:08 crc kubenswrapper[5024]: E1007 12:30:08.255720 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:08.755707802 +0000 UTC m=+146.831494640 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.356969 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:08 crc kubenswrapper[5024]: E1007 12:30:08.357198 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:08.857172264 +0000 UTC m=+146.932959102 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.357622 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:08 crc kubenswrapper[5024]: E1007 12:30:08.357978 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:08.857970716 +0000 UTC m=+146.933757554 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.458279 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:08 crc kubenswrapper[5024]: E1007 12:30:08.458434 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:08.95841725 +0000 UTC m=+147.034204078 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.458570 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:08 crc kubenswrapper[5024]: E1007 12:30:08.458983 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:08.958960585 +0000 UTC m=+147.034747433 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.531765 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bdgps" event={"ID":"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29","Type":"ContainerStarted","Data":"772b5ea51d855f5052e1a09f8c49a5d466f8658673b57324010c95c31684cb21"} Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.533447 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9t72d" event={"ID":"f631c93e-2066-410d-bfcb-232ee1cced2a","Type":"ContainerStarted","Data":"fbe13cf83795b5253065ad073bffb3684a4ee29d5e554160d6f36c2d2fe5a4a8"} Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.534449 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dvg5m" event={"ID":"c5da4f2d-db84-4a91-8f7e-7843f4062df6","Type":"ContainerStarted","Data":"9617de9b1980102fcd47241e87f2a0a1fbd73e0a89e5ea97f7f11d327eb60278"} Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.535311 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-frvxs" event={"ID":"06b21306-4b72-47d2-b814-9b4b333295bb","Type":"ContainerStarted","Data":"46773ee2e506cbc3f0726a52399b35d4bc00e9295d206a2e2f2ab14a3ab3c041"} Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.536827 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s266g" event={"ID":"f2327164-7135-4dd2-bc42-dd68cefdb772","Type":"ContainerStarted","Data":"3a620b28332cacb3d568caf4a5309664e4e24c4d1579634c5e38d8abf31eb55c"} Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.538344 5024 generic.go:334] "Generic (PLEG): container finished" podID="fd6f037f-c253-488b-9386-19aa7fab7fec" containerID="47c3b3c802dd71466a3e46866308f33783de24626d3417d3adb89619c2173fd6" exitCode=0 Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.538418 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" event={"ID":"fd6f037f-c253-488b-9386-19aa7fab7fec","Type":"ContainerDied","Data":"47c3b3c802dd71466a3e46866308f33783de24626d3417d3adb89619c2173fd6"} Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.539433 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzfnc" event={"ID":"a9bd5896-2df2-4367-be33-9891f0bc67aa","Type":"ContainerStarted","Data":"d32784d61b096c3a28f04042b73cf4603d081f6fc20fd11d40c56ead1a452186"} Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.539747 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5" Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.540966 5024 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-8vpl5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.541016 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5" 
podUID="a74a950f-a98b-45c9-bdd0-0cdda261396f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.554324 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5" podStartSLOduration=125.554304107 podStartE2EDuration="2m5.554304107s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:08.553813454 +0000 UTC m=+146.629600292" watchObservedRunningTime="2025-10-07 12:30:08.554304107 +0000 UTC m=+146.630090945" Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.559469 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:08 crc kubenswrapper[5024]: E1007 12:30:08.559606 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:09.059589203 +0000 UTC m=+147.135376041 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.559773 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:08 crc kubenswrapper[5024]: E1007 12:30:08.560129 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:09.060118078 +0000 UTC m=+147.135904916 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.672884 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:08 crc kubenswrapper[5024]: E1007 12:30:08.673735 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:09.173719215 +0000 UTC m=+147.249506053 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.776417 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:08 crc kubenswrapper[5024]: E1007 12:30:08.776788 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:09.276774751 +0000 UTC m=+147.352561589 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.877486 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:08 crc kubenswrapper[5024]: E1007 12:30:08.877722 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:09.377696418 +0000 UTC m=+147.453483256 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.877844 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:08 crc kubenswrapper[5024]: E1007 12:30:08.878251 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:09.378238313 +0000 UTC m=+147.454025141 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.979378 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:08 crc kubenswrapper[5024]: E1007 12:30:08.979533 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:09.479509579 +0000 UTC m=+147.555296417 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:08 crc kubenswrapper[5024]: I1007 12:30:08.979757 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:08 crc kubenswrapper[5024]: E1007 12:30:08.980104 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:09.480096695 +0000 UTC m=+147.555883533 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.080655 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:09 crc kubenswrapper[5024]: E1007 12:30:09.080851 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:09.580813906 +0000 UTC m=+147.656600784 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.081039 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:09 crc kubenswrapper[5024]: E1007 12:30:09.081427 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:09.581411553 +0000 UTC m=+147.657198481 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.181983 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:09 crc kubenswrapper[5024]: E1007 12:30:09.182097 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:09.682076833 +0000 UTC m=+147.757863681 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.182366 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:09 crc kubenswrapper[5024]: E1007 12:30:09.182714 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:09.68270432 +0000 UTC m=+147.758491158 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.283823 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:09 crc kubenswrapper[5024]: E1007 12:30:09.284096 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:09.784070789 +0000 UTC m=+147.859857617 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.386571 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:09 crc kubenswrapper[5024]: E1007 12:30:09.387015 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:09.886994972 +0000 UTC m=+147.962781900 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:09 crc kubenswrapper[5024]: E1007 12:30:09.487689 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:09.987671872 +0000 UTC m=+148.063458710 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.487723 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.487986 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:09 crc kubenswrapper[5024]: E1007 12:30:09.488279 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:09.988272698 +0000 UTC m=+148.064059536 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.559679 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hdr5j" event={"ID":"24a251d4-749c-4d2d-9fb5-8bd2330d7b35","Type":"ContainerStarted","Data":"aa1a74219ce26d72b4d6e8e93a7b7a1ac4c086f3888d36715a609c47200b8611"} Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.561850 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qr4f5" event={"ID":"7300a892-62e1-4ec9-b0c0-83d0aaf90bd3","Type":"ContainerStarted","Data":"c0b0290cfdecbe41543e79d79de445f85760433e18686c810400d0af3e378a8f"} Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.563800 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fdw8z" 
event={"ID":"3c3167bb-dbe2-42bf-8693-cf28b7a9a28c","Type":"ContainerStarted","Data":"71d3a8f3419b8d4d5c4dd39a17e262238f539f26c15049e414e5d2ff39631fd9"} Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.566244 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr9gr" event={"ID":"306952da-e05e-468a-8e44-5cc64940f7f6","Type":"ContainerStarted","Data":"602c7c58c44602171f25ebfa79e03980421e2dc8a71ada06e78ddd56040fc124"} Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.568252 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vt8qv" event={"ID":"17854bb6-7bec-4972-92a8-299702642b45","Type":"ContainerStarted","Data":"8b5ba3af25ae3710d5e55a27d17e02572d228a2c840e89cefd76a341cd47bfcb"} Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.569977 5024 generic.go:334] "Generic (PLEG): container finished" podID="a55e7ecf-f2fa-4e64-af0c-c7a0651ded29" containerID="772b5ea51d855f5052e1a09f8c49a5d466f8658673b57324010c95c31684cb21" exitCode=0 Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.570037 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bdgps" event={"ID":"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29","Type":"ContainerDied","Data":"772b5ea51d855f5052e1a09f8c49a5d466f8658673b57324010c95c31684cb21"} Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.574245 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wskr8" event={"ID":"97837bed-3f7c-4bf8-be43-550ea11c0a98","Type":"ContainerStarted","Data":"515cd613c495733854566a4625e96a4d4e4ff38325e2da73cc499bf7e90b6191"} Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.575951 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-l4pwr" 
event={"ID":"b3ccf7f5-1756-4f98-8b76-fe7f9ae77075","Type":"ContainerStarted","Data":"eda6c4bd2afd1b79fe284bc93fdabf85c0b9a478c9ece9920cec00d6c704a20d"} Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.576962 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wp2gk" event={"ID":"6047f06e-4b55-4a39-be9c-6341c8cf7082","Type":"ContainerStarted","Data":"edd8ce65de0c69e9a39f182b6a15b4d41e58dce75a914b2a7463e80b59c65694"} Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.578826 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jrgc5" event={"ID":"a9bbb41d-c515-43d3-8e35-a73bed39e840","Type":"ContainerStarted","Data":"8586452eadfed8bd3edfea30fcb3937119c70ee647cbc02f42ab84cfd1add67a"} Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.579909 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fxzgs" event={"ID":"e31ac6bf-7d41-4b00-a89d-eb64ebc0e9f1","Type":"ContainerStarted","Data":"c8ca4760edc2d3458542a8677883e8b93445192d915fcadae470d57737e2b3ba"} Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.582766 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x57mw" event={"ID":"bbf3be08-1d9c-4f7a-af37-48a8f2aa1159","Type":"ContainerStarted","Data":"5c70444a69a8594c8764619930df810a86ef58ec0d5f9e9a1dccd8674995359e"} Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.584963 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrdxk" event={"ID":"bc6b7685-5713-4da2-a9bf-bb61144e3561","Type":"ContainerStarted","Data":"f6fbca36a5dc8826e9d2fa630511c8d09a9436a989834c5554118cfc8e84e0e4"} Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.587005 5024 
patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-8vpl5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.587044 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5" podUID="a74a950f-a98b-45c9-bdd0-0cdda261396f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.587180 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.587203 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-wh6d2" Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.589218 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-wh6d2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.589247 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wh6d2" podUID="563d2566-d201-4094-9f4d-20a167bfd0f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.589295 5024 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-kw42v container/oauth-openshift namespace/openshift-authentication: 
Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body= Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.589316 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" podUID="06411252-fabf-416c-8b3f-3cb830b235f4" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.589587 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:09 crc kubenswrapper[5024]: E1007 12:30:09.589963 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:10.089946556 +0000 UTC m=+148.165733394 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.626937 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s266g" podStartSLOduration=126.626914397 podStartE2EDuration="2m6.626914397s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:09.625328583 +0000 UTC m=+147.701115421" watchObservedRunningTime="2025-10-07 12:30:09.626914397 +0000 UTC m=+147.702701235" Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.628300 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-wh6d2" podStartSLOduration=126.628293145 podStartE2EDuration="2m6.628293145s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:09.608048406 +0000 UTC m=+147.683835244" watchObservedRunningTime="2025-10-07 12:30:09.628293145 +0000 UTC m=+147.704079983" Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.648211 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" podStartSLOduration=126.648171514 podStartE2EDuration="2m6.648171514s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:09.647632159 +0000 UTC m=+147.723418997" watchObservedRunningTime="2025-10-07 12:30:09.648171514 +0000 UTC m=+147.723958352" Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.690710 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:09 crc kubenswrapper[5024]: E1007 12:30:09.692334 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:10.192315193 +0000 UTC m=+148.268102031 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.792874 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:09 crc kubenswrapper[5024]: E1007 12:30:09.793032 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:10.292991123 +0000 UTC m=+148.368777961 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.793553 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:09 crc kubenswrapper[5024]: E1007 12:30:09.794021 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:10.294012271 +0000 UTC m=+148.369799109 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.897720 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:09 crc kubenswrapper[5024]: E1007 12:30:09.897953 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:10.39790025 +0000 UTC m=+148.473687128 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:09 crc kubenswrapper[5024]: I1007 12:30:09.898379 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:09 crc kubenswrapper[5024]: E1007 12:30:09.899248 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:10.399228576 +0000 UTC m=+148.475015444 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.000749 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:10 crc kubenswrapper[5024]: E1007 12:30:10.001015 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:10.500963596 +0000 UTC m=+148.576750444 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.001875 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:10 crc kubenswrapper[5024]: E1007 12:30:10.002432 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:10.502421486 +0000 UTC m=+148.578208324 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.103529 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:10 crc kubenswrapper[5024]: E1007 12:30:10.103840 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:10.603800595 +0000 UTC m=+148.679587433 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.104086 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:10 crc kubenswrapper[5024]: E1007 12:30:10.104517 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:10.604505675 +0000 UTC m=+148.680292513 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.205532 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:10 crc kubenswrapper[5024]: E1007 12:30:10.205848 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:10.705830633 +0000 UTC m=+148.781617471 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.307412 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:10 crc kubenswrapper[5024]: E1007 12:30:10.307908 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:10.807885971 +0000 UTC m=+148.883672809 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.408636 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:10 crc kubenswrapper[5024]: E1007 12:30:10.408841 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:10.908816838 +0000 UTC m=+148.984603676 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.409331 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:10 crc kubenswrapper[5024]: E1007 12:30:10.409663 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:10.909653461 +0000 UTC m=+148.985440299 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.509970 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:10 crc kubenswrapper[5024]: E1007 12:30:10.510293 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:11.01027874 +0000 UTC m=+149.086065578 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.589809 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2p5j7" event={"ID":"1b35c5bc-b3c2-4109-928a-3d1898fdca29","Type":"ContainerStarted","Data":"764ea98843c38327550334d2e209be8f8c06aa84fc3455d998befd0441f5a58b"} Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.591016 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xtwlc" event={"ID":"c6f06beb-72aa-499a-a760-d36404bca577","Type":"ContainerStarted","Data":"3ff2b0fa8e865c26dfb28ceb1649c2c2af5a2d0714f87b5904210afd02e2a4c4"} Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.592003 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs" event={"ID":"f324f3c7-44fa-473c-8b60-ea30be3b7045","Type":"ContainerStarted","Data":"cdf3d7e9fed648e6864c02bc58d72df761c899657a7bc9eb627789bad4c6e52e"} Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.592966 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p9sts" event={"ID":"d52d4285-2b77-4a88-8f02-add6e0de37ff","Type":"ContainerStarted","Data":"45f6b9dcdde879177929698aeb0e85c2885a2c98d8676e3f940d38038b2aa047"} Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.593870 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nv9ml" 
event={"ID":"de462421-9ecd-4bd7-9b00-f054da067ca6","Type":"ContainerStarted","Data":"ffc0ff5308d46a99e23e426e5ade8284832aedc2ffaee0287dbea5894f57e96e"} Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.594847 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-m97mg" event={"ID":"f1234432-773a-4c0b-99db-d37df59ec9b6","Type":"ContainerStarted","Data":"106e1778f42965d09397c882cf8dd5af22b785bdc35d71d9ba01b03d6daf9351"} Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.595848 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7fw2t" event={"ID":"0abe2fa8-3512-46b2-a738-682a833ae488","Type":"ContainerStarted","Data":"a2f5f61f707b64482bcc6dcdc78e7fed46c56ca9ea8bd539777c8ac9349be114"} Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.596816 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzfnc" event={"ID":"a9bd5896-2df2-4367-be33-9891f0bc67aa","Type":"ContainerStarted","Data":"8cb7f0d7e2435f5371c00002ed8f802e51702deb409f35b0c4f235f87b3ee3e0"} Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.598189 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pxvcj" event={"ID":"bf226d94-0733-4205-9790-7590b441dac9","Type":"ContainerStarted","Data":"89e5eb829251034c79be8f62f5b4452cffa9c846f40f0f60f5d7e60732bf7c78"} Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.599107 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dvg5m" event={"ID":"c5da4f2d-db84-4a91-8f7e-7843f4062df6","Type":"ContainerStarted","Data":"721122e79e7d45913d3b94f32c22299442687ce59a0d6b23ab99e95f2128eecd"} Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.600022 5024 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-whw6l" event={"ID":"3096e7a7-c9b4-47ab-8336-b87e49e4521b","Type":"ContainerStarted","Data":"edef94373ddc926c48c755f65a2c377ca092d19345855db6f1939fa535b97e87"} Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.601377 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scskj" event={"ID":"d1d4add5-c462-4ccc-8c65-8efd72b99637","Type":"ContainerStarted","Data":"71d1cdfef292aa95b138f6eeea25c344e625cda8969d925a7ba1321d100b78df"} Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.602520 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fqbxr" event={"ID":"447bfecb-a799-47cc-ad14-2a10bc594d95","Type":"ContainerStarted","Data":"1cce0b6734b7fd140b7932939ed40186da93521bb75f944205ae98299c1c1b3e"} Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.603463 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6t8l5" event={"ID":"1c479290-3870-4f83-b3e6-a86e91bda22e","Type":"ContainerStarted","Data":"7357715ad7e6228ac9c568a90749db7af6d4468c6abe1bfc7bb409f24d59c43e"} Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.604539 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-cf5dj" event={"ID":"c495c046-5e10-4020-a686-2834edbe289e","Type":"ContainerStarted","Data":"62ca3977c25326b0174c15a8ae5fa4cbe3ad39548f88e7dbaf4b692ecffb0e16"} Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.605957 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-zh82b" event={"ID":"7b42db20-9e0c-4afa-b850-4f0e485b17e8","Type":"ContainerStarted","Data":"75d29e49682d87c4f0fc470074ef4f29937f96b3fef45ed22af98a2a60743f17"} Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 
12:30:10.607080 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nhfn7" event={"ID":"dacf4124-e72b-4d0f-8e4b-f43b083275b8","Type":"ContainerStarted","Data":"cd84e136dff5fad7efe5051a771e4383b05abeee0c6e22185c4c712b45c00335"} Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.608352 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hflv2" event={"ID":"6bb2be95-3593-4045-8dca-353189946a2f","Type":"ContainerStarted","Data":"7043fb5b673cb07a56d5ebf11b2ab3c16d73ccf702dc54e5602064b431ce05a9"} Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.609488 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9rwzm" event={"ID":"cfecdd47-3bdf-4f99-b34b-dbe793b59717","Type":"ContainerStarted","Data":"9f78f976e5b6199947696602cb2245b7a81c389c6b77eb239442351806bc95e6"} Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.610804 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.610863 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.610888 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.610917 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.610947 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:10 crc kubenswrapper[5024]: E1007 12:30:10.611226 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:11.111214997 +0000 UTC m=+149.187001835 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.611543 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-frvxs" event={"ID":"06b21306-4b72-47d2-b814-9b4b333295bb","Type":"ContainerStarted","Data":"92a1699778dc91376f45209a89baf9e751f4525e43b5638134b0ca9435838317"} Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.613730 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29330670-5868z" event={"ID":"0532ff61-84a7-44b0-b8d3-d6ffad413de5","Type":"ContainerStarted","Data":"bcddc5806317dd060af7266f263935171470854340ba0a07b4d171f5ff7f5e68"} Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.614933 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-wh6d2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.615007 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wh6d2" podUID="563d2566-d201-4094-9f4d-20a167bfd0f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.615222 5024 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-kw42v container/oauth-openshift 
namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body= Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.615315 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" podUID="06411252-fabf-416c-8b3f-3cb830b235f4" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.615663 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr9gr" Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.620380 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.623090 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.624391 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") 
" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.628583 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr9gr" podStartSLOduration=127.628555456 podStartE2EDuration="2m7.628555456s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:10.627662762 +0000 UTC m=+148.703449610" watchObservedRunningTime="2025-10-07 12:30:10.628555456 +0000 UTC m=+148.704342294" Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.633440 5024 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-gr9gr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body= Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.633502 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr9gr" podUID="306952da-e05e-468a-8e44-5cc64940f7f6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.642931 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-9t72d" podStartSLOduration=127.642912493 podStartE2EDuration="2m7.642912493s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:10.642703287 +0000 UTC m=+148.718490135" watchObservedRunningTime="2025-10-07 12:30:10.642912493 +0000 UTC 
m=+148.718699331" Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.650224 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.657345 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jrgc5" podStartSLOduration=127.657331241 podStartE2EDuration="2m7.657331241s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:10.654889723 +0000 UTC m=+148.730676561" watchObservedRunningTime="2025-10-07 12:30:10.657331241 +0000 UTC m=+148.733118079" Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.670819 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qr4f5" podStartSLOduration=127.670802063 podStartE2EDuration="2m7.670802063s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:10.670480524 +0000 UTC m=+148.746267362" watchObservedRunningTime="2025-10-07 12:30:10.670802063 +0000 UTC m=+148.746588911" Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.701600 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-l4pwr" podStartSLOduration=127.701586153 podStartE2EDuration="2m7.701586153s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:10.70112105 +0000 UTC m=+148.776907888" watchObservedRunningTime="2025-10-07 12:30:10.701586153 +0000 UTC m=+148.777372991" Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.711513 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:10 crc kubenswrapper[5024]: E1007 12:30:10.711667 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:11.21163423 +0000 UTC m=+149.287421068 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.712203 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:10 crc kubenswrapper[5024]: E1007 12:30:10.714452 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:11.214366086 +0000 UTC m=+149.290152924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.775787 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.790401 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.813235 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:10 crc kubenswrapper[5024]: E1007 12:30:10.813432 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:11.313400759 +0000 UTC m=+149.389187597 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.813803 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:10 crc kubenswrapper[5024]: E1007 12:30:10.814273 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:11.314254103 +0000 UTC m=+149.390040941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.884084 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.915018 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:10 crc kubenswrapper[5024]: E1007 12:30:10.915272 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:11.415223081 +0000 UTC m=+149.491009919 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:10 crc kubenswrapper[5024]: I1007 12:30:10.915583 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:10 crc kubenswrapper[5024]: E1007 12:30:10.915920 5024 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:11.41590547 +0000 UTC m=+149.491692308 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.006978 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrdxk" podStartSLOduration=128.006956414 podStartE2EDuration="2m8.006956414s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:10.72760233 +0000 UTC m=+148.803389168" watchObservedRunningTime="2025-10-07 12:30:11.006956414 +0000 UTC m=+149.082743252" Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.018542 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:11 crc kubenswrapper[5024]: E1007 12:30:11.018879 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2025-10-07 12:30:11.518862403 +0000 UTC m=+149.594649241 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:11 crc kubenswrapper[5024]: W1007 12:30:11.019847 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-80364d49d4e17abe5ecfbc82ffaf3c6b4b633bc812ad12c89ddc822b60806a46 WatchSource:0}: Error finding container 80364d49d4e17abe5ecfbc82ffaf3c6b4b633bc812ad12c89ddc822b60806a46: Status 404 returned error can't find the container with id 80364d49d4e17abe5ecfbc82ffaf3c6b4b633bc812ad12c89ddc822b60806a46 Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.120110 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:11 crc kubenswrapper[5024]: E1007 12:30:11.120447 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:11.620432958 +0000 UTC m=+149.696219796 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:11 crc kubenswrapper[5024]: W1007 12:30:11.137975 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-2c0cc1a15f67c286514f89b01ea641dcfc85a950500a3c51d9c984f306e82e9f WatchSource:0}: Error finding container 2c0cc1a15f67c286514f89b01ea641dcfc85a950500a3c51d9c984f306e82e9f: Status 404 returned error can't find the container with id 2c0cc1a15f67c286514f89b01ea641dcfc85a950500a3c51d9c984f306e82e9f Oct 07 12:30:11 crc kubenswrapper[5024]: W1007 12:30:11.198183 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-b36c0586e2a38bc48bcfc72e4c334cb8cbd3985215a815d87df4e1a1a7bac584 WatchSource:0}: Error finding container b36c0586e2a38bc48bcfc72e4c334cb8cbd3985215a815d87df4e1a1a7bac584: Status 404 returned error can't find the container with id b36c0586e2a38bc48bcfc72e4c334cb8cbd3985215a815d87df4e1a1a7bac584 Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.222576 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:11 crc kubenswrapper[5024]: E1007 12:30:11.222746 5024 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:11.722706252 +0000 UTC m=+149.798493090 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.222978 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:11 crc kubenswrapper[5024]: E1007 12:30:11.223406 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:11.723396161 +0000 UTC m=+149.799182999 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.323722 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:11 crc kubenswrapper[5024]: E1007 12:30:11.323888 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:11.823866386 +0000 UTC m=+149.899653224 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.324026 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:11 crc kubenswrapper[5024]: E1007 12:30:11.324327 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:11.824319088 +0000 UTC m=+149.900105926 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.401305 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-l4pwr" Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.402802 5024 patch_prober.go:28] interesting pod/router-default-5444994796-l4pwr container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.402846 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4pwr" podUID="b3ccf7f5-1756-4f98-8b76-fe7f9ae77075" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.425215 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:11 crc kubenswrapper[5024]: E1007 12:30:11.425350 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:11.925328587 +0000 UTC m=+150.001115425 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.425467 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:11 crc kubenswrapper[5024]: E1007 12:30:11.425810 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:11.92579642 +0000 UTC m=+150.001583328 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.525928 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:11 crc kubenswrapper[5024]: E1007 12:30:11.526126 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:12.02609371 +0000 UTC m=+150.101880558 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.526253 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:11 crc kubenswrapper[5024]: E1007 12:30:11.526521 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:12.026510101 +0000 UTC m=+150.102296939 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.620736 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"b36c0586e2a38bc48bcfc72e4c334cb8cbd3985215a815d87df4e1a1a7bac584"} Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.621913 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"2c0cc1a15f67c286514f89b01ea641dcfc85a950500a3c51d9c984f306e82e9f"} Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.622709 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"80364d49d4e17abe5ecfbc82ffaf3c6b4b633bc812ad12c89ddc822b60806a46"} Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.624465 5024 generic.go:334] "Generic (PLEG): container finished" podID="6bb2be95-3593-4045-8dca-353189946a2f" containerID="7043fb5b673cb07a56d5ebf11b2ab3c16d73ccf702dc54e5602064b431ce05a9" exitCode=0 Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.625257 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hflv2" 
event={"ID":"6bb2be95-3593-4045-8dca-353189946a2f","Type":"ContainerDied","Data":"7043fb5b673cb07a56d5ebf11b2ab3c16d73ccf702dc54e5602064b431ce05a9"} Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.625995 5024 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-gr9gr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body= Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.626033 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr9gr" podUID="306952da-e05e-468a-8e44-5cc64940f7f6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.626660 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:11 crc kubenswrapper[5024]: E1007 12:30:11.626790 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:12.12677401 +0000 UTC m=+150.202560848 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.626830 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:11 crc kubenswrapper[5024]: E1007 12:30:11.627165 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:12.12715192 +0000 UTC m=+150.202938758 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.661640 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fdw8z" podStartSLOduration=128.661620362 podStartE2EDuration="2m8.661620362s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:11.661046226 +0000 UTC m=+149.736833064" watchObservedRunningTime="2025-10-07 12:30:11.661620362 +0000 UTC m=+149.737407200" Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.662188 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-9rwzm" podStartSLOduration=128.662184138 podStartE2EDuration="2m8.662184138s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:11.644024026 +0000 UTC m=+149.719810864" watchObservedRunningTime="2025-10-07 12:30:11.662184138 +0000 UTC m=+149.737970976" Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.680515 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x57mw" podStartSLOduration=128.680496223 podStartE2EDuration="2m8.680496223s" 
podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:11.678673093 +0000 UTC m=+149.754459931" watchObservedRunningTime="2025-10-07 12:30:11.680496223 +0000 UTC m=+149.756283061" Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.728268 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:11 crc kubenswrapper[5024]: E1007 12:30:11.728411 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:12.228390526 +0000 UTC m=+150.304177364 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.729761 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:11 crc kubenswrapper[5024]: E1007 12:30:11.730581 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:12.230573356 +0000 UTC m=+150.306360194 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.831111 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:11 crc kubenswrapper[5024]: E1007 12:30:11.831281 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:12.331255417 +0000 UTC m=+150.407042255 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.831474 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:11 crc kubenswrapper[5024]: E1007 12:30:11.831757 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:12.33174881 +0000 UTC m=+150.407535648 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.932803 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:11 crc kubenswrapper[5024]: E1007 12:30:11.932943 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:12.432914513 +0000 UTC m=+150.508701351 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:11 crc kubenswrapper[5024]: I1007 12:30:11.933205 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:11 crc kubenswrapper[5024]: E1007 12:30:11.933545 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:12.433534781 +0000 UTC m=+150.509321699 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.034595 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:12 crc kubenswrapper[5024]: E1007 12:30:12.034809 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:12.534771956 +0000 UTC m=+150.610558804 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.136261 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:12 crc kubenswrapper[5024]: E1007 12:30:12.136557 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:12.636544937 +0000 UTC m=+150.712331775 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.236967 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:12 crc kubenswrapper[5024]: E1007 12:30:12.237318 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:12.737303779 +0000 UTC m=+150.813090617 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.338873 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:12 crc kubenswrapper[5024]: E1007 12:30:12.339123 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:12.83911318 +0000 UTC m=+150.914900018 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.403346 5024 patch_prober.go:28] interesting pod/router-default-5444994796-l4pwr container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.403421 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4pwr" podUID="b3ccf7f5-1756-4f98-8b76-fe7f9ae77075" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.440242 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:12 crc kubenswrapper[5024]: E1007 12:30:12.440642 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:12.940625843 +0000 UTC m=+151.016412681 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.542252 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:12 crc kubenswrapper[5024]: E1007 12:30:12.542593 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:13.042581089 +0000 UTC m=+151.118367927 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.630350 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" event={"ID":"fd6f037f-c253-488b-9386-19aa7fab7fec","Type":"ContainerStarted","Data":"f5fc3c3ca145deaf9d98ba4bd5b6fb8be5fab2682aca1fabe083ce5ab27337fe"} Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.631082 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs" Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.631217 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzfnc" Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.632359 5024 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jzfnc container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.632432 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzfnc" podUID="a9bd5896-2df2-4367-be33-9891f0bc67aa" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Oct 07 12:30:12 crc 
kubenswrapper[5024]: I1007 12:30:12.632702 5024 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-25rxs container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.632755 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs" podUID="f324f3c7-44fa-473c-8b60-ea30be3b7045" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.643800 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:12 crc kubenswrapper[5024]: E1007 12:30:12.644185 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:13.143962408 +0000 UTC m=+151.219749246 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.644234 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:12 crc kubenswrapper[5024]: E1007 12:30:12.644606 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:13.144589236 +0000 UTC m=+151.220376164 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.663559 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29330670-5868z" podStartSLOduration=12.663542719 podStartE2EDuration="12.663542719s" podCreationTimestamp="2025-10-07 12:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:12.662420898 +0000 UTC m=+150.738207736" watchObservedRunningTime="2025-10-07 12:30:12.663542719 +0000 UTC m=+150.739329557" Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.663645 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzfnc" podStartSLOduration=129.663642132 podStartE2EDuration="2m9.663642132s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:12.646617372 +0000 UTC m=+150.722404210" watchObservedRunningTime="2025-10-07 12:30:12.663642132 +0000 UTC m=+150.739428970" Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.674936 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7fw2t" podStartSLOduration=129.674917423 podStartE2EDuration="2m9.674917423s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:12.674389609 +0000 UTC m=+150.750176467" watchObservedRunningTime="2025-10-07 12:30:12.674917423 +0000 UTC m=+150.750704261" Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.690785 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs" podStartSLOduration=129.690762291 podStartE2EDuration="2m9.690762291s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:12.690333139 +0000 UTC m=+150.766119977" watchObservedRunningTime="2025-10-07 12:30:12.690762291 +0000 UTC m=+150.766549129" Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.703771 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-m97mg" podStartSLOduration=11.703750299 podStartE2EDuration="11.703750299s" podCreationTimestamp="2025-10-07 12:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:12.703640576 +0000 UTC m=+150.779427414" watchObservedRunningTime="2025-10-07 12:30:12.703750299 +0000 UTC m=+150.779537137" Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.735443 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-xtwlc" podStartSLOduration=129.735425944 podStartE2EDuration="2m9.735425944s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:12.734863819 +0000 UTC m=+150.810650647" 
watchObservedRunningTime="2025-10-07 12:30:12.735425944 +0000 UTC m=+150.811212782" Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.745929 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:12 crc kubenswrapper[5024]: E1007 12:30:12.746099 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:13.246077008 +0000 UTC m=+151.321863846 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.746602 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:12 crc kubenswrapper[5024]: E1007 12:30:12.747308 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:13.247290392 +0000 UTC m=+151.323077220 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.753731 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scskj" podStartSLOduration=129.753715599 podStartE2EDuration="2m9.753715599s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:12.751349794 +0000 UTC m=+150.827136632" watchObservedRunningTime="2025-10-07 12:30:12.753715599 +0000 UTC m=+150.829502437" Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.767315 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-zh82b" podStartSLOduration=129.767300484 podStartE2EDuration="2m9.767300484s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:12.765618848 +0000 UTC m=+150.841405686" watchObservedRunningTime="2025-10-07 12:30:12.767300484 +0000 UTC m=+150.843087322" Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.779330 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nhfn7" 
podStartSLOduration=129.779310456 podStartE2EDuration="2m9.779310456s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:12.775264164 +0000 UTC m=+150.851051002" watchObservedRunningTime="2025-10-07 12:30:12.779310456 +0000 UTC m=+150.855097294" Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.791665 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-nv9ml" podStartSLOduration=129.791650007 podStartE2EDuration="2m9.791650007s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:12.790661189 +0000 UTC m=+150.866448037" watchObservedRunningTime="2025-10-07 12:30:12.791650007 +0000 UTC m=+150.867436845" Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.848856 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:12 crc kubenswrapper[5024]: E1007 12:30:12.849041 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:13.349018821 +0000 UTC m=+151.424805649 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.849303 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:12 crc kubenswrapper[5024]: E1007 12:30:12.849570 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:13.349563216 +0000 UTC m=+151.425350054 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.949869 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:12 crc kubenswrapper[5024]: E1007 12:30:12.950031 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:13.45000861 +0000 UTC m=+151.525795448 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:12 crc kubenswrapper[5024]: I1007 12:30:12.950066 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:12 crc kubenswrapper[5024]: E1007 12:30:12.950373 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:13.450365539 +0000 UTC m=+151.526152377 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.051691 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:13 crc kubenswrapper[5024]: E1007 12:30:13.051856 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:13.551830021 +0000 UTC m=+151.627616859 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.051954 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:13 crc kubenswrapper[5024]: E1007 12:30:13.052310 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:13.552303324 +0000 UTC m=+151.628090162 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.152573 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:13 crc kubenswrapper[5024]: E1007 12:30:13.152723 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:13.652693177 +0000 UTC m=+151.728480015 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.152753 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:13 crc kubenswrapper[5024]: E1007 12:30:13.153061 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:13.653054817 +0000 UTC m=+151.728841645 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.253737 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:13 crc kubenswrapper[5024]: E1007 12:30:13.254094 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:13.754077966 +0000 UTC m=+151.829864804 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.336547 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5" Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.355630 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:13 crc kubenswrapper[5024]: E1007 12:30:13.355954 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:13.855942809 +0000 UTC m=+151.931729647 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.403659 5024 patch_prober.go:28] interesting pod/router-default-5444994796-l4pwr container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.403728 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4pwr" podUID="b3ccf7f5-1756-4f98-8b76-fe7f9ae77075" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.456105 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:13 crc kubenswrapper[5024]: E1007 12:30:13.456292 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:13.956264529 +0000 UTC m=+152.032051367 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.456449 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:13 crc kubenswrapper[5024]: E1007 12:30:13.457346 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:13.957332869 +0000 UTC m=+152.033119707 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.460891 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-wh6d2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.460944 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wh6d2" podUID="563d2566-d201-4094-9f4d-20a167bfd0f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.460963 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-wh6d2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.461073 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-wh6d2" podUID="563d2566-d201-4094-9f4d-20a167bfd0f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.485386 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/console-f9d7485db-9t72d"
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.485434 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-9t72d"
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.487155 5024 patch_prober.go:28] interesting pod/console-f9d7485db-9t72d container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.487196 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9t72d" podUID="f631c93e-2066-410d-bfcb-232ee1cced2a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused"
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.557972 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 07 12:30:13 crc kubenswrapper[5024]: E1007 12:30:13.558218 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:14.058180334 +0000 UTC m=+152.133967172 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.558410 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z"
Oct 07 12:30:13 crc kubenswrapper[5024]: E1007 12:30:13.559220 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:14.059206902 +0000 UTC m=+152.134993740 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.573962 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-kw42v"
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.635667 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"15c8a796bf3aabb75828e74afd88e6eb9c1cd51c522a6d32fcd04771ae9e8c96"}
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.637150 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"195d441e1e0b4de1b5366cb05f57b46d92bc125e47533dd8f7d8627965bb792f"}
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.638645 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a2033c9b128ec095e4f06d50629eed0225ce1d1e1b09dab214864751a3ab333d"}
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.639771 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hdr5j" event={"ID":"24a251d4-749c-4d2d-9fb5-8bd2330d7b35","Type":"ContainerStarted","Data":"3db635b5eb6391d7b30324d79e56adc28f750edb0ea957340677b44450f85002"}
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.640823 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wp2gk" event={"ID":"6047f06e-4b55-4a39-be9c-6341c8cf7082","Type":"ContainerStarted","Data":"ee508024fdec48965ef3f6309e6db79e04fd22ee752ade8e74b784bf30b646cb"}
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.642153 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fxzgs" event={"ID":"e31ac6bf-7d41-4b00-a89d-eb64ebc0e9f1","Type":"ContainerStarted","Data":"c5388bcf4fa583403f4c79a0b9a659c1a1afe691bd09496ade8526624204dcd9"}
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.643665 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vt8qv" event={"ID":"17854bb6-7bec-4972-92a8-299702642b45","Type":"ContainerStarted","Data":"ccc24c9fc6c1e88c97ffc3ff128ebe4c5113fe5c13249cca730d2138849b43ec"}
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.645315 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bdgps" event={"ID":"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29","Type":"ContainerStarted","Data":"c115ea0247c4f09e5f8b16a2ce2861d8ea082b4bf9117fa56b8b536a1d895b6e"}
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.647022 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6t8l5" event={"ID":"1c479290-3870-4f83-b3e6-a86e91bda22e","Type":"ContainerStarted","Data":"7b721c7e8cfb9c3e8262d2be2f0777dc954b014ab392b21940985fc093c290b5"}
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.649184 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wskr8" event={"ID":"97837bed-3f7c-4bf8-be43-550ea11c0a98","Type":"ContainerStarted","Data":"e8b9bcc8616656af7c8341dca17948ddb94c467ef78f4101c0c3be56aef293d5"}
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.649985 5024 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jzfnc container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body=
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.650024 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzfnc" podUID="a9bd5896-2df2-4367-be33-9891f0bc67aa" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused"
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.650060 5024 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-25rxs container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body=
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.650119 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs" podUID="f324f3c7-44fa-473c-8b60-ea30be3b7045" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused"
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.659528 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 07 12:30:13 crc kubenswrapper[5024]: E1007 12:30:13.659734 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:14.159700797 +0000 UTC m=+152.235487645 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.659831 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z"
Oct 07 12:30:13 crc kubenswrapper[5024]: E1007 12:30:13.660205 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:14.160187991 +0000 UTC m=+152.235974899 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.698614 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-fqbxr" podStartSLOduration=130.698591241 podStartE2EDuration="2m10.698591241s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:13.680944234 +0000 UTC m=+151.756731102" watchObservedRunningTime="2025-10-07 12:30:13.698591241 +0000 UTC m=+151.774378079"
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.718670 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dvg5m" podStartSLOduration=130.718651535 podStartE2EDuration="2m10.718651535s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:13.704119414 +0000 UTC m=+151.779906262" watchObservedRunningTime="2025-10-07 12:30:13.718651535 +0000 UTC m=+151.794438373"
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.720705 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.720843 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.762289 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 07 12:30:13 crc kubenswrapper[5024]: E1007 12:30:13.762505 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:14.262475305 +0000 UTC m=+152.338262143 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.762624 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z"
Oct 07 12:30:13 crc kubenswrapper[5024]: E1007 12:30:13.764399 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:14.264388068 +0000 UTC m=+152.340175006 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.793473 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jrgc5"
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.795074 5024 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-jrgc5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body=
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.795072 5024 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-jrgc5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body=
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.795174 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jrgc5" podUID="a9bbb41d-c515-43d3-8e35-a73bed39e840" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused"
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.795184 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jrgc5" podUID="a9bbb41d-c515-43d3-8e35-a73bed39e840" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused"
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.795392 5024 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-jrgc5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body=
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.795414 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jrgc5" podUID="a9bbb41d-c515-43d3-8e35-a73bed39e840" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused"
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.864792 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 07 12:30:13 crc kubenswrapper[5024]: E1007 12:30:13.865358 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:14.365337516 +0000 UTC m=+152.441124364 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.933256 5024 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-25rxs container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body=
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.933317 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs" podUID="f324f3c7-44fa-473c-8b60-ea30be3b7045" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused"
Oct 07 12:30:13 crc kubenswrapper[5024]: I1007 12:30:13.966800 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z"
Oct 07 12:30:13 crc kubenswrapper[5024]: E1007 12:30:13.967304 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:14.467284211 +0000 UTC m=+152.543071069 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.003852 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-xtwlc"
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.006395 5024 patch_prober.go:28] interesting pod/console-operator-58897d9998-xtwlc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.006430 5024 patch_prober.go:28] interesting pod/console-operator-58897d9998-xtwlc container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.006457 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xtwlc" podUID="c6f06beb-72aa-499a-a760-d36404bca577" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.006468 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-xtwlc" podUID="c6f06beb-72aa-499a-a760-d36404bca577" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.006742 5024 patch_prober.go:28] interesting pod/console-operator-58897d9998-xtwlc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.006773 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xtwlc" podUID="c6f06beb-72aa-499a-a760-d36404bca577" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.036318 5024 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-gr9gr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body=
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.036378 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr9gr" podUID="306952da-e05e-468a-8e44-5cc64940f7f6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused"
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.036479 5024 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-gr9gr container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body=
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.036527 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr9gr" podUID="306952da-e05e-468a-8e44-5cc64940f7f6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused"
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.068215 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 07 12:30:14 crc kubenswrapper[5024]: E1007 12:30:14.068372 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:14.568351742 +0000 UTC m=+152.644138580 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.068553 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z"
Oct 07 12:30:14 crc kubenswrapper[5024]: E1007 12:30:14.068904 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:14.568895717 +0000 UTC m=+152.644682555 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.169481 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 07 12:30:14 crc kubenswrapper[5024]: E1007 12:30:14.169989 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:14.669937037 +0000 UTC m=+152.745724055 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.271398 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z"
Oct 07 12:30:14 crc kubenswrapper[5024]: E1007 12:30:14.272512 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:14.772491049 +0000 UTC m=+152.848277887 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.373866 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 07 12:30:14 crc kubenswrapper[5024]: E1007 12:30:14.374448 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:14.874427113 +0000 UTC m=+152.950213951 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.401701 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-l4pwr"
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.405583 5024 patch_prober.go:28] interesting pod/router-default-5444994796-l4pwr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 07 12:30:14 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld
Oct 07 12:30:14 crc kubenswrapper[5024]: [+]process-running ok
Oct 07 12:30:14 crc kubenswrapper[5024]: healthz check failed
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.405633 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4pwr" podUID="b3ccf7f5-1756-4f98-8b76-fe7f9ae77075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.476089 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z"
Oct 07 12:30:14 crc kubenswrapper[5024]: E1007 12:30:14.477567 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:14.9775453 +0000 UTC m=+153.053332138 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.478848 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fqbxr"
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.481665 5024 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fqbxr container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body=
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.481726 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-fqbxr" podUID="447bfecb-a799-47cc-ad14-2a10bc594d95" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused"
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.482035 5024 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fqbxr container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body=
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.482058 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fqbxr" podUID="447bfecb-a799-47cc-ad14-2a10bc594d95" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused"
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.510749 5024 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jzfnc container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body=
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.510822 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzfnc" podUID="a9bd5896-2df2-4367-be33-9891f0bc67aa" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused"
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.510867 5024 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jzfnc container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body=
Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.510978 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzfnc" podUID="a9bd5896-2df2-4367-be33-9891f0bc67aa" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused"
Oct 07 
12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.577541 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:14 crc kubenswrapper[5024]: E1007 12:30:14.577901 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:15.077886161 +0000 UTC m=+153.153672999 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.655770 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p9sts" event={"ID":"d52d4285-2b77-4a88-8f02-add6e0de37ff","Type":"ContainerStarted","Data":"de6b2fde9f0214a6322a3cc3108d59d32fa11f9eb002b26c46971a4f322a567b"} Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.658129 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hflv2" event={"ID":"6bb2be95-3593-4045-8dca-353189946a2f","Type":"ContainerStarted","Data":"5ec08da8808c8ae71a1fd74b04ea2b6f9707d21bea54234cf21cecb2e6e14648"} Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.658532 5024 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hflv2" Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.659433 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-frvxs" event={"ID":"06b21306-4b72-47d2-b814-9b4b333295bb","Type":"ContainerStarted","Data":"85f3092ac7117212de7c12fa684797e6a030f14be98cff1f3513a882fc7a91c6"} Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.660905 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-whw6l" event={"ID":"3096e7a7-c9b4-47ab-8336-b87e49e4521b","Type":"ContainerStarted","Data":"8170ef573277ecfdc40c0fe706195161b51cdd7078128275f1dca23b4008f240"} Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.663014 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2p5j7" event={"ID":"1b35c5bc-b3c2-4109-928a-3d1898fdca29","Type":"ContainerStarted","Data":"3740785c5ccd5487148339722f395414ea47870c7f0de41b662589857ffbc775"} Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.665457 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pxvcj" event={"ID":"bf226d94-0733-4205-9790-7590b441dac9","Type":"ContainerStarted","Data":"0f0323cc8546d5239b84c3a362061e2f040cc47aeaa379c64a419938f9d8a411"} Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.666171 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6t8l5" Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.667207 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.667560 5024 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-wskr8" Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.669548 5024 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fqbxr container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.669583 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fqbxr" podUID="447bfecb-a799-47cc-ad14-2a10bc594d95" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.677572 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-cf5dj" podStartSLOduration=13.677556003 podStartE2EDuration="13.677556003s" podCreationTimestamp="2025-10-07 12:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:13.717439522 +0000 UTC m=+151.793226360" watchObservedRunningTime="2025-10-07 12:30:14.677556003 +0000 UTC m=+152.753342841" Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.678559 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p9sts" podStartSLOduration=131.678555191 podStartE2EDuration="2m11.678555191s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:14.676944816 +0000 UTC m=+152.752731654" watchObservedRunningTime="2025-10-07 12:30:14.678555191 +0000 
UTC m=+152.754342029" Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.680783 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:14 crc kubenswrapper[5024]: E1007 12:30:14.681133 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:15.181120672 +0000 UTC m=+153.256907510 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.758588 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-wskr8" podStartSLOduration=13.75857415 podStartE2EDuration="13.75857415s" podCreationTimestamp="2025-10-07 12:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:14.757794289 +0000 UTC m=+152.833581127" watchObservedRunningTime="2025-10-07 12:30:14.75857415 +0000 UTC m=+152.834360988" Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.759028 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-hflv2" podStartSLOduration=131.759022543 podStartE2EDuration="2m11.759022543s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:14.73898888 +0000 UTC m=+152.814775718" watchObservedRunningTime="2025-10-07 12:30:14.759022543 +0000 UTC m=+152.834809381" Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.780217 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fxzgs" podStartSLOduration=131.780198088 podStartE2EDuration="2m11.780198088s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:14.779876199 +0000 UTC m=+152.855663047" watchObservedRunningTime="2025-10-07 12:30:14.780198088 +0000 UTC m=+152.855984916" Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.782012 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:14 crc kubenswrapper[5024]: E1007 12:30:14.782220 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:15.282189493 +0000 UTC m=+153.357976331 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.784311 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:14 crc kubenswrapper[5024]: E1007 12:30:14.784762 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:15.284749143 +0000 UTC m=+153.360535981 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.824632 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" podStartSLOduration=131.824610514 podStartE2EDuration="2m11.824610514s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:14.822824145 +0000 UTC m=+152.898610983" watchObservedRunningTime="2025-10-07 12:30:14.824610514 +0000 UTC m=+152.900397352" Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.844657 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vt8qv" podStartSLOduration=131.844636537 podStartE2EDuration="2m11.844636537s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:14.843500536 +0000 UTC m=+152.919287374" watchObservedRunningTime="2025-10-07 12:30:14.844636537 +0000 UTC m=+152.920423375" Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.853630 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.854228 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.855927 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.858633 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.872907 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2p5j7" podStartSLOduration=131.872888927 podStartE2EDuration="2m11.872888927s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:14.865164674 +0000 UTC m=+152.940951512" watchObservedRunningTime="2025-10-07 12:30:14.872888927 +0000 UTC m=+152.948675765" Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.875553 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.885096 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:14 crc kubenswrapper[5024]: E1007 12:30:14.885395 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-07 12:30:15.385364162 +0000 UTC m=+153.461151000 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.903098 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hdr5j" podStartSLOduration=131.903084411 podStartE2EDuration="2m11.903084411s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:14.901036824 +0000 UTC m=+152.976823662" watchObservedRunningTime="2025-10-07 12:30:14.903084411 +0000 UTC m=+152.978871249" Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.928831 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-frvxs" podStartSLOduration=131.928813941 podStartE2EDuration="2m11.928813941s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:14.927702371 +0000 UTC m=+153.003489219" watchObservedRunningTime="2025-10-07 12:30:14.928813941 +0000 UTC m=+153.004600779" Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.955612 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6t8l5" 
podStartSLOduration=131.955576301 podStartE2EDuration="2m11.955576301s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:14.943399834 +0000 UTC m=+153.019186672" watchObservedRunningTime="2025-10-07 12:30:14.955576301 +0000 UTC m=+153.031363139" Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.962653 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-whw6l" podStartSLOduration=131.962630165 podStartE2EDuration="2m11.962630165s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:14.962597814 +0000 UTC m=+153.038384652" watchObservedRunningTime="2025-10-07 12:30:14.962630165 +0000 UTC m=+153.038416993" Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.989557 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.989610 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.989644 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:14 crc kubenswrapper[5024]: E1007 12:30:14.989998 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:15.489983011 +0000 UTC m=+153.565769849 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:14 crc kubenswrapper[5024]: I1007 12:30:14.999884 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pxvcj" podStartSLOduration=131.999861003 podStartE2EDuration="2m11.999861003s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:14.98344891 +0000 UTC m=+153.059235758" watchObservedRunningTime="2025-10-07 12:30:14.999861003 +0000 UTC m=+153.075647841" Oct 07 12:30:15 crc kubenswrapper[5024]: I1007 12:30:15.020960 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-wp2gk" podStartSLOduration=132.020937195 
podStartE2EDuration="2m12.020937195s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:15.01747203 +0000 UTC m=+153.093258868" watchObservedRunningTime="2025-10-07 12:30:15.020937195 +0000 UTC m=+153.096724033" Oct 07 12:30:15 crc kubenswrapper[5024]: I1007 12:30:15.091264 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:15 crc kubenswrapper[5024]: E1007 12:30:15.091446 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:15.591419752 +0000 UTC m=+153.667206590 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:15 crc kubenswrapper[5024]: I1007 12:30:15.091858 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 07 12:30:15 crc kubenswrapper[5024]: I1007 12:30:15.091974 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 07 12:30:15 crc kubenswrapper[5024]: I1007 12:30:15.092065 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:15 crc kubenswrapper[5024]: I1007 12:30:15.092120 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 07 12:30:15 crc kubenswrapper[5024]: E1007 12:30:15.092441 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:15.592424739 +0000 UTC m=+153.668211577 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:15 crc kubenswrapper[5024]: I1007 12:30:15.121890 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 07 12:30:15 crc kubenswrapper[5024]: I1007 12:30:15.168864 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 07 12:30:15 crc kubenswrapper[5024]: I1007 12:30:15.192903 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:15 crc kubenswrapper[5024]: E1007 12:30:15.193226 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:15.693211423 +0000 UTC m=+153.768998261 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:15 crc kubenswrapper[5024]: I1007 12:30:15.294589 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:15 crc kubenswrapper[5024]: E1007 12:30:15.295090 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:15.795079036 +0000 UTC m=+153.870865874 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:15 crc kubenswrapper[5024]: I1007 12:30:15.396396 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:15 crc kubenswrapper[5024]: E1007 12:30:15.396928 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:15.896908368 +0000 UTC m=+153.972695206 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:15 crc kubenswrapper[5024]: I1007 12:30:15.408978 5024 patch_prober.go:28] interesting pod/router-default-5444994796-l4pwr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 07 12:30:15 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld Oct 07 12:30:15 crc kubenswrapper[5024]: [+]process-running ok Oct 07 12:30:15 crc kubenswrapper[5024]: healthz check failed Oct 07 12:30:15 crc kubenswrapper[5024]: I1007 12:30:15.409070 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4pwr" podUID="b3ccf7f5-1756-4f98-8b76-fe7f9ae77075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 07 12:30:15 crc kubenswrapper[5024]: I1007 12:30:15.426034 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 07 12:30:15 crc kubenswrapper[5024]: I1007 12:30:15.497512 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:15 crc kubenswrapper[5024]: E1007 12:30:15.497824 5024 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:15.997812434 +0000 UTC m=+154.073599272 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:15 crc kubenswrapper[5024]: I1007 12:30:15.599173 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:15 crc kubenswrapper[5024]: E1007 12:30:15.599381 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:16.099349988 +0000 UTC m=+154.175136826 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:15 crc kubenswrapper[5024]: I1007 12:30:15.599450 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:15 crc kubenswrapper[5024]: E1007 12:30:15.599763 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:16.099749679 +0000 UTC m=+154.175536577 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:15 crc kubenswrapper[5024]: I1007 12:30:15.676623 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9qlzn" event={"ID":"c7b3c652-baa1-4549-9b0b-974f430b56dd","Type":"ContainerStarted","Data":"fddfd96271030b14263debeb6a5b1990efeed8dd9030ca13905be5aca57ac67a"} Oct 07 12:30:15 crc kubenswrapper[5024]: I1007 12:30:15.682250 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0","Type":"ContainerStarted","Data":"50c440eb6c381d7fdf4e1909bf7371d74872ded8ebad23facd77116339fac845"} Oct 07 12:30:15 crc kubenswrapper[5024]: I1007 12:30:15.691559 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bdgps" event={"ID":"a55e7ecf-f2fa-4e64-af0c-c7a0651ded29","Type":"ContainerStarted","Data":"1384c3e71ce4577c91f631768d4f06cab97be326b93f19bd2f840d54b0345d86"} Oct 07 12:30:15 crc kubenswrapper[5024]: I1007 12:30:15.700463 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:15 crc kubenswrapper[5024]: E1007 12:30:15.700599 5024 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:16.200580383 +0000 UTC m=+154.276367221 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:15 crc kubenswrapper[5024]: I1007 12:30:15.700737 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:15 crc kubenswrapper[5024]: E1007 12:30:15.701029 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:16.201021295 +0000 UTC m=+154.276808133 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:15 crc kubenswrapper[5024]: I1007 12:30:15.801948 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:15 crc kubenswrapper[5024]: E1007 12:30:15.802105 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:16.302082726 +0000 UTC m=+154.377869574 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:15 crc kubenswrapper[5024]: I1007 12:30:15.802215 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:15 crc kubenswrapper[5024]: E1007 12:30:15.802545 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:16.302538369 +0000 UTC m=+154.378325207 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:15 crc kubenswrapper[5024]: I1007 12:30:15.905452 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:15 crc kubenswrapper[5024]: E1007 12:30:15.905671 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:16.405629345 +0000 UTC m=+154.481416183 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:15 crc kubenswrapper[5024]: I1007 12:30:15.905742 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:15 crc kubenswrapper[5024]: E1007 12:30:15.906115 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:16.406105048 +0000 UTC m=+154.481891966 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:16 crc kubenswrapper[5024]: I1007 12:30:16.007376 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:16 crc kubenswrapper[5024]: E1007 12:30:16.007587 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:16.50755587 +0000 UTC m=+154.583342718 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:16 crc kubenswrapper[5024]: I1007 12:30:16.007695 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:16 crc kubenswrapper[5024]: E1007 12:30:16.008076 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:16.508060564 +0000 UTC m=+154.583847402 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:16 crc kubenswrapper[5024]: I1007 12:30:16.109157 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:16 crc kubenswrapper[5024]: E1007 12:30:16.109302 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:16.609272509 +0000 UTC m=+154.685059347 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:16 crc kubenswrapper[5024]: I1007 12:30:16.109593 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:16 crc kubenswrapper[5024]: E1007 12:30:16.109918 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:16.609910216 +0000 UTC m=+154.685697054 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:16 crc kubenswrapper[5024]: I1007 12:30:16.210259 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:16 crc kubenswrapper[5024]: E1007 12:30:16.210475 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:16.710443943 +0000 UTC m=+154.786230791 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:16 crc kubenswrapper[5024]: I1007 12:30:16.210555 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:16 crc kubenswrapper[5024]: E1007 12:30:16.210936 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:16.710927216 +0000 UTC m=+154.786714054 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:16 crc kubenswrapper[5024]: I1007 12:30:16.311238 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:16 crc kubenswrapper[5024]: E1007 12:30:16.311435 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:16.811410011 +0000 UTC m=+154.887196849 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:16 crc kubenswrapper[5024]: I1007 12:30:16.311551 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:16 crc kubenswrapper[5024]: E1007 12:30:16.311878 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:16.811867633 +0000 UTC m=+154.887654471 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:16 crc kubenswrapper[5024]: I1007 12:30:16.405736 5024 patch_prober.go:28] interesting pod/router-default-5444994796-l4pwr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 07 12:30:16 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld Oct 07 12:30:16 crc kubenswrapper[5024]: [+]process-running ok Oct 07 12:30:16 crc kubenswrapper[5024]: healthz check failed Oct 07 12:30:16 crc kubenswrapper[5024]: I1007 12:30:16.405793 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4pwr" podUID="b3ccf7f5-1756-4f98-8b76-fe7f9ae77075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 07 12:30:16 crc kubenswrapper[5024]: I1007 12:30:16.412420 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:16 crc kubenswrapper[5024]: E1007 12:30:16.412539 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-07 12:30:16.912515443 +0000 UTC m=+154.988302281 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:16 crc kubenswrapper[5024]: I1007 12:30:16.412796 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:16 crc kubenswrapper[5024]: E1007 12:30:16.413116 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:16.913101629 +0000 UTC m=+154.988888467 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:16 crc kubenswrapper[5024]: I1007 12:30:16.514122 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:16 crc kubenswrapper[5024]: E1007 12:30:16.514328 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:17.014279903 +0000 UTC m=+155.090066741 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:16 crc kubenswrapper[5024]: I1007 12:30:16.514683 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:16 crc kubenswrapper[5024]: E1007 12:30:16.515024 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:17.014993972 +0000 UTC m=+155.090780810 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:16 crc kubenswrapper[5024]: I1007 12:30:16.615746 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:16 crc kubenswrapper[5024]: E1007 12:30:16.615964 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:17.11593398 +0000 UTC m=+155.191720818 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:16 crc kubenswrapper[5024]: I1007 12:30:16.616327 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:16 crc kubenswrapper[5024]: E1007 12:30:16.616786 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:17.116767463 +0000 UTC m=+155.192554301 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:16 crc kubenswrapper[5024]: I1007 12:30:16.699302 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0","Type":"ContainerStarted","Data":"0dd1ae807a58a517ee94809fd96ab15fd61b5cd0f68bf94b0792ffabde2328d0"} Oct 07 12:30:16 crc kubenswrapper[5024]: I1007 12:30:16.718034 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:16 crc kubenswrapper[5024]: E1007 12:30:16.718260 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:17.218227275 +0000 UTC m=+155.294014113 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:16 crc kubenswrapper[5024]: I1007 12:30:16.718324 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:16 crc kubenswrapper[5024]: E1007 12:30:16.718722 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:17.218709878 +0000 UTC m=+155.294496716 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:16 crc kubenswrapper[5024]: I1007 12:30:16.726464 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-bdgps" podStartSLOduration=133.726443231 podStartE2EDuration="2m13.726443231s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:15.746115341 +0000 UTC m=+153.821902179" watchObservedRunningTime="2025-10-07 12:30:16.726443231 +0000 UTC m=+154.802230069" Oct 07 12:30:16 crc kubenswrapper[5024]: I1007 12:30:16.819002 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:16 crc kubenswrapper[5024]: E1007 12:30:16.819318 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:17.319300056 +0000 UTC m=+155.395086894 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:16 crc kubenswrapper[5024]: I1007 12:30:16.921215 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:16 crc kubenswrapper[5024]: E1007 12:30:16.921497 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:17.421484717 +0000 UTC m=+155.497271555 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:17 crc kubenswrapper[5024]: I1007 12:30:17.021920 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:17 crc kubenswrapper[5024]: E1007 12:30:17.022120 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:17.522094426 +0000 UTC m=+155.597881264 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:17 crc kubenswrapper[5024]: I1007 12:30:17.022243 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:17 crc kubenswrapper[5024]: E1007 12:30:17.022525 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:17.522511567 +0000 UTC m=+155.598298405 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:17 crc kubenswrapper[5024]: I1007 12:30:17.123836 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:17 crc kubenswrapper[5024]: E1007 12:30:17.124095 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:17.624048421 +0000 UTC m=+155.699835259 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:17 crc kubenswrapper[5024]: I1007 12:30:17.124223 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:17 crc kubenswrapper[5024]: E1007 12:30:17.124561 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:17.624543785 +0000 UTC m=+155.700330633 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:17 crc kubenswrapper[5024]: I1007 12:30:17.225068 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:17 crc kubenswrapper[5024]: E1007 12:30:17.225302 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:17.725273766 +0000 UTC m=+155.801060604 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:17 crc kubenswrapper[5024]: I1007 12:30:17.225511 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:17 crc kubenswrapper[5024]: E1007 12:30:17.225878 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:17.725863812 +0000 UTC m=+155.801650650 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:17 crc kubenswrapper[5024]: I1007 12:30:17.326600 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:17 crc kubenswrapper[5024]: E1007 12:30:17.326773 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:17.826747258 +0000 UTC m=+155.902534086 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:17 crc kubenswrapper[5024]: I1007 12:30:17.326988 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:17 crc kubenswrapper[5024]: E1007 12:30:17.327350 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:17.827334215 +0000 UTC m=+155.903121053 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:17 crc kubenswrapper[5024]: I1007 12:30:17.407529 5024 patch_prober.go:28] interesting pod/router-default-5444994796-l4pwr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 07 12:30:17 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld Oct 07 12:30:17 crc kubenswrapper[5024]: [+]process-running ok Oct 07 12:30:17 crc kubenswrapper[5024]: healthz check failed Oct 07 12:30:17 crc kubenswrapper[5024]: I1007 12:30:17.407596 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4pwr" podUID="b3ccf7f5-1756-4f98-8b76-fe7f9ae77075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 07 12:30:17 crc kubenswrapper[5024]: I1007 12:30:17.428467 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:17 crc kubenswrapper[5024]: E1007 12:30:17.428640 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-07 12:30:17.928607451 +0000 UTC m=+156.004394289 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:17 crc kubenswrapper[5024]: I1007 12:30:17.428723 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:17 crc kubenswrapper[5024]: E1007 12:30:17.429103 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:17.929094974 +0000 UTC m=+156.004881802 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:17 crc kubenswrapper[5024]: I1007 12:30:17.530609 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:17 crc kubenswrapper[5024]: E1007 12:30:17.530873 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:18.030848014 +0000 UTC m=+156.106634852 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:17 crc kubenswrapper[5024]: I1007 12:30:17.632381 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:17 crc kubenswrapper[5024]: E1007 12:30:17.632766 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:18.132748098 +0000 UTC m=+156.208534936 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:17 crc kubenswrapper[5024]: I1007 12:30:17.705172 5024 generic.go:334] "Generic (PLEG): container finished" podID="e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0" containerID="0dd1ae807a58a517ee94809fd96ab15fd61b5cd0f68bf94b0792ffabde2328d0" exitCode=0 Oct 07 12:30:17 crc kubenswrapper[5024]: I1007 12:30:17.705241 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0","Type":"ContainerDied","Data":"0dd1ae807a58a517ee94809fd96ab15fd61b5cd0f68bf94b0792ffabde2328d0"} Oct 07 12:30:17 crc kubenswrapper[5024]: I1007 12:30:17.733838 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:17 crc kubenswrapper[5024]: E1007 12:30:17.734033 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:18.234003154 +0000 UTC m=+156.309790002 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:17 crc kubenswrapper[5024]: I1007 12:30:17.734474 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:17 crc kubenswrapper[5024]: E1007 12:30:17.734807 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:18.234790886 +0000 UTC m=+156.310577724 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:17 crc kubenswrapper[5024]: I1007 12:30:17.838149 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:17 crc kubenswrapper[5024]: E1007 12:30:17.838531 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:18.33850638 +0000 UTC m=+156.414293208 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:17 crc kubenswrapper[5024]: I1007 12:30:17.838678 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:17 crc kubenswrapper[5024]: E1007 12:30:17.839019 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:18.339005094 +0000 UTC m=+156.414791922 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:17 crc kubenswrapper[5024]: I1007 12:30:17.899065 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 07 12:30:17 crc kubenswrapper[5024]: I1007 12:30:17.899656 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 07 12:30:17 crc kubenswrapper[5024]: I1007 12:30:17.901992 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 07 12:30:17 crc kubenswrapper[5024]: I1007 12:30:17.902700 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Oct 07 12:30:17 crc kubenswrapper[5024]: I1007 12:30:17.913766 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 07 12:30:17 crc kubenswrapper[5024]: I1007 12:30:17.939527 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:17 crc kubenswrapper[5024]: E1007 12:30:17.939694 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:18.439667492 +0000 UTC m=+156.515454340 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:17 crc kubenswrapper[5024]: I1007 12:30:17.939743 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:17 crc kubenswrapper[5024]: E1007 12:30:17.940375 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:18.440365142 +0000 UTC m=+156.516151980 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.040768 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:18 crc kubenswrapper[5024]: E1007 12:30:18.040924 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:18.540900998 +0000 UTC m=+156.616687836 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.040988 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f7138cb-78ee-4467-be40-7d97cf6ed0e9-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3f7138cb-78ee-4467-be40-7d97cf6ed0e9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.041072 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.041118 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f7138cb-78ee-4467-be40-7d97cf6ed0e9-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3f7138cb-78ee-4467-be40-7d97cf6ed0e9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 07 12:30:18 crc kubenswrapper[5024]: E1007 12:30:18.041369 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-07 12:30:18.541358301 +0000 UTC m=+156.617145149 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.070299 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hwthk"] Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.071563 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hwthk" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.074785 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.103006 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hwthk"] Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.142396 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:18 crc kubenswrapper[5024]: E1007 12:30:18.142503 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-07 12:30:18.642484873 +0000 UTC m=+156.718271711 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.142623 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f7138cb-78ee-4467-be40-7d97cf6ed0e9-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3f7138cb-78ee-4467-be40-7d97cf6ed0e9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.142678 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.142708 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f7138cb-78ee-4467-be40-7d97cf6ed0e9-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3f7138cb-78ee-4467-be40-7d97cf6ed0e9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.142758 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/3f7138cb-78ee-4467-be40-7d97cf6ed0e9-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3f7138cb-78ee-4467-be40-7d97cf6ed0e9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 07 12:30:18 crc kubenswrapper[5024]: E1007 12:30:18.143033 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:18.643022568 +0000 UTC m=+156.718809406 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.179260 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f7138cb-78ee-4467-be40-7d97cf6ed0e9-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3f7138cb-78ee-4467-be40-7d97cf6ed0e9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.216833 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.243783 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:18 crc kubenswrapper[5024]: E1007 12:30:18.243960 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:18.743934295 +0000 UTC m=+156.819721133 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.244010 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0007f6bb-883d-4bb8-b3ee-4c37095c342d-utilities\") pod \"community-operators-hwthk\" (UID: \"0007f6bb-883d-4bb8-b3ee-4c37095c342d\") " pod="openshift-marketplace/community-operators-hwthk" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.244074 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.244230 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0007f6bb-883d-4bb8-b3ee-4c37095c342d-catalog-content\") pod \"community-operators-hwthk\" (UID: \"0007f6bb-883d-4bb8-b3ee-4c37095c342d\") " pod="openshift-marketplace/community-operators-hwthk" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.244270 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrgch\" (UniqueName: \"kubernetes.io/projected/0007f6bb-883d-4bb8-b3ee-4c37095c342d-kube-api-access-vrgch\") pod \"community-operators-hwthk\" (UID: \"0007f6bb-883d-4bb8-b3ee-4c37095c342d\") " pod="openshift-marketplace/community-operators-hwthk" Oct 07 12:30:18 crc kubenswrapper[5024]: E1007 12:30:18.244338 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:18.744326995 +0000 UTC m=+156.820113833 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.265028 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rsgpt"] Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.266430 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rsgpt" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.268867 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.280359 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.280449 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.280468 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rsgpt"] Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.286305 5024 patch_prober.go:28] interesting pod/apiserver-76f77b778f-bdgps container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.10:8443/livez\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.286372 5024 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-bdgps" podUID="a55e7ecf-f2fa-4e64-af0c-c7a0651ded29" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.10:8443/livez\": dial tcp 10.217.0.10:8443: connect: connection refused" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.345753 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:18 crc kubenswrapper[5024]: E1007 12:30:18.345941 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:18.845914691 +0000 UTC m=+156.921701529 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.346291 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.346347 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0007f6bb-883d-4bb8-b3ee-4c37095c342d-catalog-content\") pod \"community-operators-hwthk\" (UID: \"0007f6bb-883d-4bb8-b3ee-4c37095c342d\") " pod="openshift-marketplace/community-operators-hwthk" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.346366 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrgch\" (UniqueName: \"kubernetes.io/projected/0007f6bb-883d-4bb8-b3ee-4c37095c342d-kube-api-access-vrgch\") pod \"community-operators-hwthk\" (UID: \"0007f6bb-883d-4bb8-b3ee-4c37095c342d\") " pod="openshift-marketplace/community-operators-hwthk" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.346390 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0007f6bb-883d-4bb8-b3ee-4c37095c342d-utilities\") pod \"community-operators-hwthk\" (UID: 
\"0007f6bb-883d-4bb8-b3ee-4c37095c342d\") " pod="openshift-marketplace/community-operators-hwthk" Oct 07 12:30:18 crc kubenswrapper[5024]: E1007 12:30:18.346642 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:18.846634411 +0000 UTC m=+156.922421249 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.346817 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0007f6bb-883d-4bb8-b3ee-4c37095c342d-utilities\") pod \"community-operators-hwthk\" (UID: \"0007f6bb-883d-4bb8-b3ee-4c37095c342d\") " pod="openshift-marketplace/community-operators-hwthk" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.346917 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0007f6bb-883d-4bb8-b3ee-4c37095c342d-catalog-content\") pod \"community-operators-hwthk\" (UID: \"0007f6bb-883d-4bb8-b3ee-4c37095c342d\") " pod="openshift-marketplace/community-operators-hwthk" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.379967 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrgch\" (UniqueName: \"kubernetes.io/projected/0007f6bb-883d-4bb8-b3ee-4c37095c342d-kube-api-access-vrgch\") pod \"community-operators-hwthk\" (UID: 
\"0007f6bb-883d-4bb8-b3ee-4c37095c342d\") " pod="openshift-marketplace/community-operators-hwthk" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.386442 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hwthk" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.411545 5024 patch_prober.go:28] interesting pod/router-default-5444994796-l4pwr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 07 12:30:18 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld Oct 07 12:30:18 crc kubenswrapper[5024]: [+]process-running ok Oct 07 12:30:18 crc kubenswrapper[5024]: healthz check failed Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.411592 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4pwr" podUID="b3ccf7f5-1756-4f98-8b76-fe7f9ae77075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.447575 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:18 crc kubenswrapper[5024]: E1007 12:30:18.447747 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:18.947718962 +0000 UTC m=+157.023505800 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.447792 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1080756c-912c-4750-b8b3-df0cc6e623f7-catalog-content\") pod \"certified-operators-rsgpt\" (UID: \"1080756c-912c-4750-b8b3-df0cc6e623f7\") " pod="openshift-marketplace/certified-operators-rsgpt" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.447887 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nqqh\" (UniqueName: \"kubernetes.io/projected/1080756c-912c-4750-b8b3-df0cc6e623f7-kube-api-access-2nqqh\") pod \"certified-operators-rsgpt\" (UID: \"1080756c-912c-4750-b8b3-df0cc6e623f7\") " pod="openshift-marketplace/certified-operators-rsgpt" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.447932 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.447955 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1080756c-912c-4750-b8b3-df0cc6e623f7-utilities\") pod \"certified-operators-rsgpt\" (UID: \"1080756c-912c-4750-b8b3-df0cc6e623f7\") " pod="openshift-marketplace/certified-operators-rsgpt" Oct 07 12:30:18 crc kubenswrapper[5024]: E1007 12:30:18.449431 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:18.949423869 +0000 UTC m=+157.025210707 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.476040 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fkrsk"] Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.477227 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fkrsk" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.515778 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fkrsk"] Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.529788 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.530770 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.554152 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.554469 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1080756c-912c-4750-b8b3-df0cc6e623f7-catalog-content\") pod \"certified-operators-rsgpt\" (UID: \"1080756c-912c-4750-b8b3-df0cc6e623f7\") " pod="openshift-marketplace/certified-operators-rsgpt" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.554503 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nqqh\" (UniqueName: \"kubernetes.io/projected/1080756c-912c-4750-b8b3-df0cc6e623f7-kube-api-access-2nqqh\") pod \"certified-operators-rsgpt\" (UID: \"1080756c-912c-4750-b8b3-df0cc6e623f7\") " pod="openshift-marketplace/certified-operators-rsgpt" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.554532 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1080756c-912c-4750-b8b3-df0cc6e623f7-utilities\") pod \"certified-operators-rsgpt\" (UID: \"1080756c-912c-4750-b8b3-df0cc6e623f7\") " pod="openshift-marketplace/certified-operators-rsgpt" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.554916 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1080756c-912c-4750-b8b3-df0cc6e623f7-utilities\") pod \"certified-operators-rsgpt\" (UID: \"1080756c-912c-4750-b8b3-df0cc6e623f7\") " pod="openshift-marketplace/certified-operators-rsgpt" Oct 07 12:30:18 crc kubenswrapper[5024]: E1007 12:30:18.554998 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:19.054982694 +0000 UTC m=+157.130769522 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.555216 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1080756c-912c-4750-b8b3-df0cc6e623f7-catalog-content\") pod \"certified-operators-rsgpt\" (UID: \"1080756c-912c-4750-b8b3-df0cc6e623f7\") " pod="openshift-marketplace/certified-operators-rsgpt" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.563318 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.586807 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.596173 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nqqh\" (UniqueName: \"kubernetes.io/projected/1080756c-912c-4750-b8b3-df0cc6e623f7-kube-api-access-2nqqh\") pod \"certified-operators-rsgpt\" (UID: \"1080756c-912c-4750-b8b3-df0cc6e623f7\") " pod="openshift-marketplace/certified-operators-rsgpt" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.656328 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b060128a-9755-499a-b0f4-d9fc67649e66-catalog-content\") pod \"community-operators-fkrsk\" (UID: \"b060128a-9755-499a-b0f4-d9fc67649e66\") " 
pod="openshift-marketplace/community-operators-fkrsk" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.656417 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b060128a-9755-499a-b0f4-d9fc67649e66-utilities\") pod \"community-operators-fkrsk\" (UID: \"b060128a-9755-499a-b0f4-d9fc67649e66\") " pod="openshift-marketplace/community-operators-fkrsk" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.656487 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49lpf\" (UniqueName: \"kubernetes.io/projected/b060128a-9755-499a-b0f4-d9fc67649e66-kube-api-access-49lpf\") pod \"community-operators-fkrsk\" (UID: \"b060128a-9755-499a-b0f4-d9fc67649e66\") " pod="openshift-marketplace/community-operators-fkrsk" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.656520 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:18 crc kubenswrapper[5024]: E1007 12:30:18.658603 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:19.158584155 +0000 UTC m=+157.234370993 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.666526 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gk8sd"] Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.667414 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gk8sd" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.685285 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gk8sd"] Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.717521 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3f7138cb-78ee-4467-be40-7d97cf6ed0e9","Type":"ContainerStarted","Data":"8428ff8b92f1bfa412755b6d9797b26a5d989d5197d1b99215c0951aa6eaad39"} Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.736190 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ml5hz" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.767181 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.767729 5024 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b060128a-9755-499a-b0f4-d9fc67649e66-catalog-content\") pod \"community-operators-fkrsk\" (UID: \"b060128a-9755-499a-b0f4-d9fc67649e66\") " pod="openshift-marketplace/community-operators-fkrsk" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.767801 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b060128a-9755-499a-b0f4-d9fc67649e66-utilities\") pod \"community-operators-fkrsk\" (UID: \"b060128a-9755-499a-b0f4-d9fc67649e66\") " pod="openshift-marketplace/community-operators-fkrsk" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.767856 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49lpf\" (UniqueName: \"kubernetes.io/projected/b060128a-9755-499a-b0f4-d9fc67649e66-kube-api-access-49lpf\") pod \"community-operators-fkrsk\" (UID: \"b060128a-9755-499a-b0f4-d9fc67649e66\") " pod="openshift-marketplace/community-operators-fkrsk" Oct 07 12:30:18 crc kubenswrapper[5024]: E1007 12:30:18.768765 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:19.268740407 +0000 UTC m=+157.344527245 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.769218 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b060128a-9755-499a-b0f4-d9fc67649e66-catalog-content\") pod \"community-operators-fkrsk\" (UID: \"b060128a-9755-499a-b0f4-d9fc67649e66\") " pod="openshift-marketplace/community-operators-fkrsk" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.769560 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b060128a-9755-499a-b0f4-d9fc67649e66-utilities\") pod \"community-operators-fkrsk\" (UID: \"b060128a-9755-499a-b0f4-d9fc67649e66\") " pod="openshift-marketplace/community-operators-fkrsk" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.815693 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49lpf\" (UniqueName: \"kubernetes.io/projected/b060128a-9755-499a-b0f4-d9fc67649e66-kube-api-access-49lpf\") pod \"community-operators-fkrsk\" (UID: \"b060128a-9755-499a-b0f4-d9fc67649e66\") " pod="openshift-marketplace/community-operators-fkrsk" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.825169 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fkrsk" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.872026 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c12e98a2-133d-411c-b54a-e303efbe8889-catalog-content\") pod \"certified-operators-gk8sd\" (UID: \"c12e98a2-133d-411c-b54a-e303efbe8889\") " pod="openshift-marketplace/certified-operators-gk8sd" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.872078 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsr27\" (UniqueName: \"kubernetes.io/projected/c12e98a2-133d-411c-b54a-e303efbe8889-kube-api-access-hsr27\") pod \"certified-operators-gk8sd\" (UID: \"c12e98a2-133d-411c-b54a-e303efbe8889\") " pod="openshift-marketplace/certified-operators-gk8sd" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.872129 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c12e98a2-133d-411c-b54a-e303efbe8889-utilities\") pod \"certified-operators-gk8sd\" (UID: \"c12e98a2-133d-411c-b54a-e303efbe8889\") " pod="openshift-marketplace/certified-operators-gk8sd" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.872283 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:18 crc kubenswrapper[5024]: E1007 12:30:18.872541 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:19.372528663 +0000 UTC m=+157.448315501 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.884153 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rsgpt" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.974165 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.974494 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c12e98a2-133d-411c-b54a-e303efbe8889-catalog-content\") pod \"certified-operators-gk8sd\" (UID: \"c12e98a2-133d-411c-b54a-e303efbe8889\") " pod="openshift-marketplace/certified-operators-gk8sd" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.974521 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsr27\" (UniqueName: \"kubernetes.io/projected/c12e98a2-133d-411c-b54a-e303efbe8889-kube-api-access-hsr27\") pod \"certified-operators-gk8sd\" (UID: \"c12e98a2-133d-411c-b54a-e303efbe8889\") " pod="openshift-marketplace/certified-operators-gk8sd" Oct 07 
12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.974571 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c12e98a2-133d-411c-b54a-e303efbe8889-utilities\") pod \"certified-operators-gk8sd\" (UID: \"c12e98a2-133d-411c-b54a-e303efbe8889\") " pod="openshift-marketplace/certified-operators-gk8sd" Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.975070 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c12e98a2-133d-411c-b54a-e303efbe8889-utilities\") pod \"certified-operators-gk8sd\" (UID: \"c12e98a2-133d-411c-b54a-e303efbe8889\") " pod="openshift-marketplace/certified-operators-gk8sd" Oct 07 12:30:18 crc kubenswrapper[5024]: E1007 12:30:18.975187 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:19.475168107 +0000 UTC m=+157.550954945 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:18 crc kubenswrapper[5024]: I1007 12:30:18.975504 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c12e98a2-133d-411c-b54a-e303efbe8889-catalog-content\") pod \"certified-operators-gk8sd\" (UID: \"c12e98a2-133d-411c-b54a-e303efbe8889\") " pod="openshift-marketplace/certified-operators-gk8sd" Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.000792 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsr27\" (UniqueName: \"kubernetes.io/projected/c12e98a2-133d-411c-b54a-e303efbe8889-kube-api-access-hsr27\") pod \"certified-operators-gk8sd\" (UID: \"c12e98a2-133d-411c-b54a-e303efbe8889\") " pod="openshift-marketplace/certified-operators-gk8sd" Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.081332 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:19 crc kubenswrapper[5024]: E1007 12:30:19.081661 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-07 12:30:19.581648477 +0000 UTC m=+157.657435315 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.182265 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:19 crc kubenswrapper[5024]: E1007 12:30:19.182425 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:19.682388809 +0000 UTC m=+157.758175647 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.182479 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:19 crc kubenswrapper[5024]: E1007 12:30:19.182827 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:19.682815431 +0000 UTC m=+157.758602269 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.187727 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.189848 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hwthk"] Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.287674 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gk8sd" Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.288198 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.288251 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0-kube-api-access\") pod \"e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0\" (UID: \"e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0\") " Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.288301 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0-kubelet-dir\") pod \"e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0\" (UID: \"e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0\") " Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.288539 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0" (UID: "e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 07 12:30:19 crc kubenswrapper[5024]: E1007 12:30:19.288563 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:19.78854045 +0000 UTC m=+157.864327278 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.293338 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0" (UID: "e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.378875 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hflv2" Oct 07 12:30:19 crc kubenswrapper[5024]: E1007 12:30:19.400775 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:19.900754229 +0000 UTC m=+157.976541067 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.394063 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.401835 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.401852 5024 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.419059 5024 patch_prober.go:28] interesting pod/router-default-5444994796-l4pwr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 07 12:30:19 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld Oct 07 12:30:19 crc kubenswrapper[5024]: [+]process-running ok Oct 07 12:30:19 crc kubenswrapper[5024]: healthz check failed Oct 07 12:30:19 crc kubenswrapper[5024]: 
I1007 12:30:19.419127 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4pwr" podUID="b3ccf7f5-1756-4f98-8b76-fe7f9ae77075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.439872 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fkrsk"] Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.514052 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:19 crc kubenswrapper[5024]: E1007 12:30:19.515366 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:20.015348233 +0000 UTC m=+158.091135071 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.553440 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-wskr8" Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.565511 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rsgpt"] Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.618859 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:19 crc kubenswrapper[5024]: E1007 12:30:19.619285 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:20.119268903 +0000 UTC m=+158.195055741 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.722531 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:19 crc kubenswrapper[5024]: E1007 12:30:19.722795 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:20.222775291 +0000 UTC m=+158.298562129 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.722898 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:19 crc kubenswrapper[5024]: E1007 12:30:19.723288 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:20.223278665 +0000 UTC m=+158.299065513 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.738359 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.739859 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0","Type":"ContainerDied","Data":"50c440eb6c381d7fdf4e1909bf7371d74872ded8ebad23facd77116339fac845"} Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.739897 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50c440eb6c381d7fdf4e1909bf7371d74872ded8ebad23facd77116339fac845" Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.747525 5024 generic.go:334] "Generic (PLEG): container finished" podID="0532ff61-84a7-44b0-b8d3-d6ffad413de5" containerID="bcddc5806317dd060af7266f263935171470854340ba0a07b4d171f5ff7f5e68" exitCode=0 Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.747608 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29330670-5868z" event={"ID":"0532ff61-84a7-44b0-b8d3-d6ffad413de5","Type":"ContainerDied","Data":"bcddc5806317dd060af7266f263935171470854340ba0a07b4d171f5ff7f5e68"} Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.759648 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3f7138cb-78ee-4467-be40-7d97cf6ed0e9","Type":"ContainerStarted","Data":"09d177a07d3de84d86dabded8f00198c1780753812bfa8b1b43764818187103a"} Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.772784 5024 generic.go:334] "Generic (PLEG): container finished" podID="0007f6bb-883d-4bb8-b3ee-4c37095c342d" containerID="41957a0c428fd7830a92728785e043dd2d3e07ff35e8923835f0280bca7cb331" exitCode=0 Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.772886 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwthk" 
event={"ID":"0007f6bb-883d-4bb8-b3ee-4c37095c342d","Type":"ContainerDied","Data":"41957a0c428fd7830a92728785e043dd2d3e07ff35e8923835f0280bca7cb331"} Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.772919 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwthk" event={"ID":"0007f6bb-883d-4bb8-b3ee-4c37095c342d","Type":"ContainerStarted","Data":"d6965c8e65070a849d334a70c8301c74586c6ea0e63ab5704a5d12f870ad09d4"} Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.774808 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.775408 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rsgpt" event={"ID":"1080756c-912c-4750-b8b3-df0cc6e623f7","Type":"ContainerStarted","Data":"db302175ef30feb270c4c6acd33ff1eb9f46c42a0191a47d8e0cbcca017472be"} Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.777774 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkrsk" event={"ID":"b060128a-9755-499a-b0f4-d9fc67649e66","Type":"ContainerStarted","Data":"4370f3ccc9cbf08cfa6be96a0b47f3c3aec7c60f8af0b7c0f2465b5c3c37287c"} Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.783122 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gk8sd"] Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.793573 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.793552626 podStartE2EDuration="2.793552626s" podCreationTimestamp="2025-10-07 12:30:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:19.792685442 +0000 UTC m=+157.868472280" watchObservedRunningTime="2025-10-07 
12:30:19.793552626 +0000 UTC m=+157.869339464" Oct 07 12:30:19 crc kubenswrapper[5024]: W1007 12:30:19.803316 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc12e98a2_133d_411c_b54a_e303efbe8889.slice/crio-8cd860d9fffe7a40a298e82c631c9fab2ff1407eb68deead0e74b5672de56080 WatchSource:0}: Error finding container 8cd860d9fffe7a40a298e82c631c9fab2ff1407eb68deead0e74b5672de56080: Status 404 returned error can't find the container with id 8cd860d9fffe7a40a298e82c631c9fab2ff1407eb68deead0e74b5672de56080 Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.829659 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:19 crc kubenswrapper[5024]: E1007 12:30:19.830760 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:20.330745733 +0000 UTC m=+158.406532571 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:19 crc kubenswrapper[5024]: I1007 12:30:19.930795 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:19 crc kubenswrapper[5024]: E1007 12:30:19.931203 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:20.431187066 +0000 UTC m=+158.506973904 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.031249 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:20 crc kubenswrapper[5024]: E1007 12:30:20.031381 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:20.531355573 +0000 UTC m=+158.607142411 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.031758 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:20 crc kubenswrapper[5024]: E1007 12:30:20.032070 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:20.532053142 +0000 UTC m=+158.607840030 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.132338 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:20 crc kubenswrapper[5024]: E1007 12:30:20.132498 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:20.632474435 +0000 UTC m=+158.708261273 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.132654 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:20 crc kubenswrapper[5024]: E1007 12:30:20.133024 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:20.63300923 +0000 UTC m=+158.708796068 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.233439 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:20 crc kubenswrapper[5024]: E1007 12:30:20.233765 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:20.733743271 +0000 UTC m=+158.809530109 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.267641 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dsmzn"] Oct 07 12:30:20 crc kubenswrapper[5024]: E1007 12:30:20.267877 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0" containerName="pruner" Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.267905 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0" containerName="pruner" Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.268003 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6992827-a8c1-4ac3-b9a0-b1f55fad8ee0" containerName="pruner" Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.268730 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dsmzn" Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.271899 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.287034 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsmzn"] Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.334323 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.334375 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8af207aa-2798-477a-80e7-d8c7377fa8f4-catalog-content\") pod \"redhat-marketplace-dsmzn\" (UID: \"8af207aa-2798-477a-80e7-d8c7377fa8f4\") " pod="openshift-marketplace/redhat-marketplace-dsmzn" Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.334398 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8af207aa-2798-477a-80e7-d8c7377fa8f4-utilities\") pod \"redhat-marketplace-dsmzn\" (UID: \"8af207aa-2798-477a-80e7-d8c7377fa8f4\") " pod="openshift-marketplace/redhat-marketplace-dsmzn" Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.334435 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nwd5\" (UniqueName: \"kubernetes.io/projected/8af207aa-2798-477a-80e7-d8c7377fa8f4-kube-api-access-6nwd5\") 
pod \"redhat-marketplace-dsmzn\" (UID: \"8af207aa-2798-477a-80e7-d8c7377fa8f4\") " pod="openshift-marketplace/redhat-marketplace-dsmzn" Oct 07 12:30:20 crc kubenswrapper[5024]: E1007 12:30:20.334769 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:20.834753091 +0000 UTC m=+158.910539939 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.406599 5024 patch_prober.go:28] interesting pod/router-default-5444994796-l4pwr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 07 12:30:20 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld Oct 07 12:30:20 crc kubenswrapper[5024]: [+]process-running ok Oct 07 12:30:20 crc kubenswrapper[5024]: healthz check failed Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.406655 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4pwr" podUID="b3ccf7f5-1756-4f98-8b76-fe7f9ae77075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.435493 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:20 crc kubenswrapper[5024]: E1007 12:30:20.435684 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:20.935655537 +0000 UTC m=+159.011442385 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.435756 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.435828 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8af207aa-2798-477a-80e7-d8c7377fa8f4-catalog-content\") pod \"redhat-marketplace-dsmzn\" (UID: \"8af207aa-2798-477a-80e7-d8c7377fa8f4\") " pod="openshift-marketplace/redhat-marketplace-dsmzn" Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.435865 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8af207aa-2798-477a-80e7-d8c7377fa8f4-utilities\") pod \"redhat-marketplace-dsmzn\" (UID: \"8af207aa-2798-477a-80e7-d8c7377fa8f4\") " pod="openshift-marketplace/redhat-marketplace-dsmzn" Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.435927 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nwd5\" (UniqueName: \"kubernetes.io/projected/8af207aa-2798-477a-80e7-d8c7377fa8f4-kube-api-access-6nwd5\") pod \"redhat-marketplace-dsmzn\" (UID: \"8af207aa-2798-477a-80e7-d8c7377fa8f4\") " pod="openshift-marketplace/redhat-marketplace-dsmzn" Oct 07 12:30:20 crc kubenswrapper[5024]: E1007 12:30:20.436101 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:20.936092109 +0000 UTC m=+159.011879007 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.436353 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8af207aa-2798-477a-80e7-d8c7377fa8f4-catalog-content\") pod \"redhat-marketplace-dsmzn\" (UID: \"8af207aa-2798-477a-80e7-d8c7377fa8f4\") " pod="openshift-marketplace/redhat-marketplace-dsmzn" Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.436420 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8af207aa-2798-477a-80e7-d8c7377fa8f4-utilities\") pod \"redhat-marketplace-dsmzn\" (UID: \"8af207aa-2798-477a-80e7-d8c7377fa8f4\") " pod="openshift-marketplace/redhat-marketplace-dsmzn" Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.457566 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nwd5\" (UniqueName: \"kubernetes.io/projected/8af207aa-2798-477a-80e7-d8c7377fa8f4-kube-api-access-6nwd5\") pod \"redhat-marketplace-dsmzn\" (UID: \"8af207aa-2798-477a-80e7-d8c7377fa8f4\") " pod="openshift-marketplace/redhat-marketplace-dsmzn" Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.537065 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:20 crc kubenswrapper[5024]: E1007 12:30:20.537267 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:21.037241512 +0000 UTC m=+159.113028350 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.537347 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:20 crc kubenswrapper[5024]: E1007 12:30:20.537672 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:21.037665294 +0000 UTC m=+159.113452172 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.604189 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dsmzn" Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.643223 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:20 crc kubenswrapper[5024]: E1007 12:30:20.643971 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:21.143955659 +0000 UTC m=+159.219742497 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.671205 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2blfn"] Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.672391 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2blfn" Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.690072 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2blfn"] Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.745547 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:20 crc kubenswrapper[5024]: E1007 12:30:20.745923 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:21.245907214 +0000 UTC m=+159.321694052 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.746352 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mln5w\" (UniqueName: \"kubernetes.io/projected/9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0-kube-api-access-mln5w\") pod \"redhat-marketplace-2blfn\" (UID: \"9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0\") " pod="openshift-marketplace/redhat-marketplace-2blfn" Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.746426 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0-catalog-content\") pod \"redhat-marketplace-2blfn\" (UID: \"9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0\") " pod="openshift-marketplace/redhat-marketplace-2blfn" Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.746454 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0-utilities\") pod \"redhat-marketplace-2blfn\" (UID: \"9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0\") " pod="openshift-marketplace/redhat-marketplace-2blfn" Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.792867 5024 generic.go:334] "Generic (PLEG): container finished" podID="c12e98a2-133d-411c-b54a-e303efbe8889" containerID="b9524e32a0fa4812123679f859cc81b2c646efbb274be86107270877a4b30169" exitCode=0 Oct 07 12:30:20 crc kubenswrapper[5024]: 
I1007 12:30:20.792969 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gk8sd" event={"ID":"c12e98a2-133d-411c-b54a-e303efbe8889","Type":"ContainerDied","Data":"b9524e32a0fa4812123679f859cc81b2c646efbb274be86107270877a4b30169"} Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.793006 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gk8sd" event={"ID":"c12e98a2-133d-411c-b54a-e303efbe8889","Type":"ContainerStarted","Data":"8cd860d9fffe7a40a298e82c631c9fab2ff1407eb68deead0e74b5672de56080"} Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.795954 5024 generic.go:334] "Generic (PLEG): container finished" podID="3f7138cb-78ee-4467-be40-7d97cf6ed0e9" containerID="09d177a07d3de84d86dabded8f00198c1780753812bfa8b1b43764818187103a" exitCode=0 Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.796044 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3f7138cb-78ee-4467-be40-7d97cf6ed0e9","Type":"ContainerDied","Data":"09d177a07d3de84d86dabded8f00198c1780753812bfa8b1b43764818187103a"} Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.799806 5024 generic.go:334] "Generic (PLEG): container finished" podID="b060128a-9755-499a-b0f4-d9fc67649e66" containerID="ba0db7ab4cdfd77ea698390e1b2b706a558f00e2978b1af22ec7ce880ff38864" exitCode=0 Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.800131 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkrsk" event={"ID":"b060128a-9755-499a-b0f4-d9fc67649e66","Type":"ContainerDied","Data":"ba0db7ab4cdfd77ea698390e1b2b706a558f00e2978b1af22ec7ce880ff38864"} Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.805122 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9qlzn" 
event={"ID":"c7b3c652-baa1-4549-9b0b-974f430b56dd","Type":"ContainerStarted","Data":"d88e3373df6c57841da2c204a5eb4603b3520c9a03ef16f072db5125c7d9dee2"} Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.808919 5024 generic.go:334] "Generic (PLEG): container finished" podID="1080756c-912c-4750-b8b3-df0cc6e623f7" containerID="c5db5f4a20a4bff7e0622b8589772bf40c2be9eac1544853983ae303ade32645" exitCode=0 Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.809228 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rsgpt" event={"ID":"1080756c-912c-4750-b8b3-df0cc6e623f7","Type":"ContainerDied","Data":"c5db5f4a20a4bff7e0622b8589772bf40c2be9eac1544853983ae303ade32645"} Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.847307 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.847513 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0-catalog-content\") pod \"redhat-marketplace-2blfn\" (UID: \"9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0\") " pod="openshift-marketplace/redhat-marketplace-2blfn" Oct 07 12:30:20 crc kubenswrapper[5024]: E1007 12:30:20.847588 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:21.347558841 +0000 UTC m=+159.423345699 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.847673 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0-utilities\") pod \"redhat-marketplace-2blfn\" (UID: \"9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0\") " pod="openshift-marketplace/redhat-marketplace-2blfn" Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.847925 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0-catalog-content\") pod \"redhat-marketplace-2blfn\" (UID: \"9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0\") " pod="openshift-marketplace/redhat-marketplace-2blfn" Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.847957 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.848073 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mln5w\" (UniqueName: \"kubernetes.io/projected/9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0-kube-api-access-mln5w\") pod \"redhat-marketplace-2blfn\" (UID: 
\"9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0\") " pod="openshift-marketplace/redhat-marketplace-2blfn" Oct 07 12:30:20 crc kubenswrapper[5024]: E1007 12:30:20.848857 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:21.348840137 +0000 UTC m=+159.424627065 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.848860 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0-utilities\") pod \"redhat-marketplace-2blfn\" (UID: \"9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0\") " pod="openshift-marketplace/redhat-marketplace-2blfn" Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.867321 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mln5w\" (UniqueName: \"kubernetes.io/projected/9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0-kube-api-access-mln5w\") pod \"redhat-marketplace-2blfn\" (UID: \"9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0\") " pod="openshift-marketplace/redhat-marketplace-2blfn" Oct 07 12:30:20 crc kubenswrapper[5024]: I1007 12:30:20.948650 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:20 crc kubenswrapper[5024]: E1007 12:30:20.948843 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:21.448818107 +0000 UTC m=+159.524604945 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.001852 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2blfn" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.009081 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29330670-5868z" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.049289 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nztt6\" (UniqueName: \"kubernetes.io/projected/0532ff61-84a7-44b0-b8d3-d6ffad413de5-kube-api-access-nztt6\") pod \"0532ff61-84a7-44b0-b8d3-d6ffad413de5\" (UID: \"0532ff61-84a7-44b0-b8d3-d6ffad413de5\") " Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.049326 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0532ff61-84a7-44b0-b8d3-d6ffad413de5-config-volume\") pod \"0532ff61-84a7-44b0-b8d3-d6ffad413de5\" (UID: \"0532ff61-84a7-44b0-b8d3-d6ffad413de5\") " Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.049505 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0532ff61-84a7-44b0-b8d3-d6ffad413de5-secret-volume\") pod \"0532ff61-84a7-44b0-b8d3-d6ffad413de5\" (UID: \"0532ff61-84a7-44b0-b8d3-d6ffad413de5\") " Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.049634 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:21 crc kubenswrapper[5024]: E1007 12:30:21.049886 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:21.549874718 +0000 UTC m=+159.625661556 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.051148 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0532ff61-84a7-44b0-b8d3-d6ffad413de5-config-volume" (OuterVolumeSpecName: "config-volume") pod "0532ff61-84a7-44b0-b8d3-d6ffad413de5" (UID: "0532ff61-84a7-44b0-b8d3-d6ffad413de5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.052844 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0532ff61-84a7-44b0-b8d3-d6ffad413de5-kube-api-access-nztt6" (OuterVolumeSpecName: "kube-api-access-nztt6") pod "0532ff61-84a7-44b0-b8d3-d6ffad413de5" (UID: "0532ff61-84a7-44b0-b8d3-d6ffad413de5"). InnerVolumeSpecName "kube-api-access-nztt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.053244 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0532ff61-84a7-44b0-b8d3-d6ffad413de5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0532ff61-84a7-44b0-b8d3-d6ffad413de5" (UID: "0532ff61-84a7-44b0-b8d3-d6ffad413de5"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.146972 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsmzn"] Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.150207 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:21 crc kubenswrapper[5024]: E1007 12:30:21.150388 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:21.650357803 +0000 UTC m=+159.726144641 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.150574 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.150765 5024 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0532ff61-84a7-44b0-b8d3-d6ffad413de5-config-volume\") on node \"crc\" DevicePath \"\"" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.150785 5024 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0532ff61-84a7-44b0-b8d3-d6ffad413de5-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.150798 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nztt6\" (UniqueName: \"kubernetes.io/projected/0532ff61-84a7-44b0-b8d3-d6ffad413de5-kube-api-access-nztt6\") on node \"crc\" DevicePath \"\"" Oct 07 12:30:21 crc kubenswrapper[5024]: E1007 12:30:21.150928 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-07 12:30:21.650912718 +0000 UTC m=+159.726699606 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.226952 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2blfn"] Oct 07 12:30:21 crc kubenswrapper[5024]: W1007 12:30:21.233055 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b37a99d_f44f_4644_8ed5_f8bfaf4f13a0.slice/crio-58cfc1b95b7e06de6911121ea31959fed11cf68a9f00f650619bd6c631c25869 WatchSource:0}: Error finding container 58cfc1b95b7e06de6911121ea31959fed11cf68a9f00f650619bd6c631c25869: Status 404 returned error can't find the container with id 58cfc1b95b7e06de6911121ea31959fed11cf68a9f00f650619bd6c631c25869 Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.251305 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:21 crc kubenswrapper[5024]: E1007 12:30:21.251497 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-07 12:30:21.751457614 +0000 UTC m=+159.827244452 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.251808 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:21 crc kubenswrapper[5024]: E1007 12:30:21.252323 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:21.752313118 +0000 UTC m=+159.828099956 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.255156 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w84bt"] Oct 07 12:30:21 crc kubenswrapper[5024]: E1007 12:30:21.256102 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0532ff61-84a7-44b0-b8d3-d6ffad413de5" containerName="collect-profiles" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.256116 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0532ff61-84a7-44b0-b8d3-d6ffad413de5" containerName="collect-profiles" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.256341 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="0532ff61-84a7-44b0-b8d3-d6ffad413de5" containerName="collect-profiles" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.257718 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w84bt" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.261086 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.270478 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w84bt"] Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.353403 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:21 crc kubenswrapper[5024]: E1007 12:30:21.353578 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:21.853555244 +0000 UTC m=+159.929342082 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.353722 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:21 crc kubenswrapper[5024]: E1007 12:30:21.354313 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:21.854292474 +0000 UTC m=+159.930079372 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.404372 5024 patch_prober.go:28] interesting pod/router-default-5444994796-l4pwr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 07 12:30:21 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld Oct 07 12:30:21 crc kubenswrapper[5024]: [+]process-running ok Oct 07 12:30:21 crc kubenswrapper[5024]: healthz check failed Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.404431 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4pwr" podUID="b3ccf7f5-1756-4f98-8b76-fe7f9ae77075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.458541 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.458826 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8805154-72fd-434b-88f8-9cb3ca239aa9-catalog-content\") pod \"redhat-operators-w84bt\" (UID: 
\"e8805154-72fd-434b-88f8-9cb3ca239aa9\") " pod="openshift-marketplace/redhat-operators-w84bt" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.458895 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xz64\" (UniqueName: \"kubernetes.io/projected/e8805154-72fd-434b-88f8-9cb3ca239aa9-kube-api-access-2xz64\") pod \"redhat-operators-w84bt\" (UID: \"e8805154-72fd-434b-88f8-9cb3ca239aa9\") " pod="openshift-marketplace/redhat-operators-w84bt" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.458997 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8805154-72fd-434b-88f8-9cb3ca239aa9-utilities\") pod \"redhat-operators-w84bt\" (UID: \"e8805154-72fd-434b-88f8-9cb3ca239aa9\") " pod="openshift-marketplace/redhat-operators-w84bt" Oct 07 12:30:21 crc kubenswrapper[5024]: E1007 12:30:21.459655 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:21.959605542 +0000 UTC m=+160.035392380 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.459772 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:21 crc kubenswrapper[5024]: E1007 12:30:21.460341 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:21.960330232 +0000 UTC m=+160.036117070 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.561291 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:21 crc kubenswrapper[5024]: E1007 12:30:21.561534 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:22.061497965 +0000 UTC m=+160.137284823 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.561877 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8805154-72fd-434b-88f8-9cb3ca239aa9-utilities\") pod \"redhat-operators-w84bt\" (UID: \"e8805154-72fd-434b-88f8-9cb3ca239aa9\") " pod="openshift-marketplace/redhat-operators-w84bt" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.561918 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.561956 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8805154-72fd-434b-88f8-9cb3ca239aa9-catalog-content\") pod \"redhat-operators-w84bt\" (UID: \"e8805154-72fd-434b-88f8-9cb3ca239aa9\") " pod="openshift-marketplace/redhat-operators-w84bt" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.561981 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xz64\" (UniqueName: \"kubernetes.io/projected/e8805154-72fd-434b-88f8-9cb3ca239aa9-kube-api-access-2xz64\") pod \"redhat-operators-w84bt\" (UID: 
\"e8805154-72fd-434b-88f8-9cb3ca239aa9\") " pod="openshift-marketplace/redhat-operators-w84bt" Oct 07 12:30:21 crc kubenswrapper[5024]: E1007 12:30:21.562397 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:22.062382299 +0000 UTC m=+160.138169157 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.578400 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xz64\" (UniqueName: \"kubernetes.io/projected/e8805154-72fd-434b-88f8-9cb3ca239aa9-kube-api-access-2xz64\") pod \"redhat-operators-w84bt\" (UID: \"e8805154-72fd-434b-88f8-9cb3ca239aa9\") " pod="openshift-marketplace/redhat-operators-w84bt" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.654647 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tw9jc"] Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.655866 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tw9jc" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.662908 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:21 crc kubenswrapper[5024]: E1007 12:30:21.663010 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:22.162993988 +0000 UTC m=+160.238780826 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.663227 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4475b87f-7d9c-4a7d-aa85-fcce45d805ae-catalog-content\") pod \"redhat-operators-tw9jc\" (UID: \"4475b87f-7d9c-4a7d-aa85-fcce45d805ae\") " pod="openshift-marketplace/redhat-operators-tw9jc" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.663288 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjfn6\" (UniqueName: 
\"kubernetes.io/projected/4475b87f-7d9c-4a7d-aa85-fcce45d805ae-kube-api-access-vjfn6\") pod \"redhat-operators-tw9jc\" (UID: \"4475b87f-7d9c-4a7d-aa85-fcce45d805ae\") " pod="openshift-marketplace/redhat-operators-tw9jc" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.663332 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4475b87f-7d9c-4a7d-aa85-fcce45d805ae-utilities\") pod \"redhat-operators-tw9jc\" (UID: \"4475b87f-7d9c-4a7d-aa85-fcce45d805ae\") " pod="openshift-marketplace/redhat-operators-tw9jc" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.663389 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:21 crc kubenswrapper[5024]: E1007 12:30:21.663689 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:22.163682527 +0000 UTC m=+160.239469365 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.665675 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tw9jc"] Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.726182 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8805154-72fd-434b-88f8-9cb3ca239aa9-utilities\") pod \"redhat-operators-w84bt\" (UID: \"e8805154-72fd-434b-88f8-9cb3ca239aa9\") " pod="openshift-marketplace/redhat-operators-w84bt" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.726216 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8805154-72fd-434b-88f8-9cb3ca239aa9-catalog-content\") pod \"redhat-operators-w84bt\" (UID: \"e8805154-72fd-434b-88f8-9cb3ca239aa9\") " pod="openshift-marketplace/redhat-operators-w84bt" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.764296 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.764498 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4475b87f-7d9c-4a7d-aa85-fcce45d805ae-catalog-content\") pod \"redhat-operators-tw9jc\" (UID: \"4475b87f-7d9c-4a7d-aa85-fcce45d805ae\") " pod="openshift-marketplace/redhat-operators-tw9jc" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.764536 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjfn6\" (UniqueName: \"kubernetes.io/projected/4475b87f-7d9c-4a7d-aa85-fcce45d805ae-kube-api-access-vjfn6\") pod \"redhat-operators-tw9jc\" (UID: \"4475b87f-7d9c-4a7d-aa85-fcce45d805ae\") " pod="openshift-marketplace/redhat-operators-tw9jc" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.764565 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4475b87f-7d9c-4a7d-aa85-fcce45d805ae-utilities\") pod \"redhat-operators-tw9jc\" (UID: \"4475b87f-7d9c-4a7d-aa85-fcce45d805ae\") " pod="openshift-marketplace/redhat-operators-tw9jc" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.764979 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4475b87f-7d9c-4a7d-aa85-fcce45d805ae-utilities\") pod \"redhat-operators-tw9jc\" (UID: \"4475b87f-7d9c-4a7d-aa85-fcce45d805ae\") " pod="openshift-marketplace/redhat-operators-tw9jc" Oct 07 12:30:21 crc kubenswrapper[5024]: E1007 12:30:21.765047 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:22.265033535 +0000 UTC m=+160.340820373 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.765267 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4475b87f-7d9c-4a7d-aa85-fcce45d805ae-catalog-content\") pod \"redhat-operators-tw9jc\" (UID: \"4475b87f-7d9c-4a7d-aa85-fcce45d805ae\") " pod="openshift-marketplace/redhat-operators-tw9jc" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.781886 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjfn6\" (UniqueName: \"kubernetes.io/projected/4475b87f-7d9c-4a7d-aa85-fcce45d805ae-kube-api-access-vjfn6\") pod \"redhat-operators-tw9jc\" (UID: \"4475b87f-7d9c-4a7d-aa85-fcce45d805ae\") " pod="openshift-marketplace/redhat-operators-tw9jc" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.814776 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2blfn" event={"ID":"9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0","Type":"ContainerStarted","Data":"58cfc1b95b7e06de6911121ea31959fed11cf68a9f00f650619bd6c631c25869"} Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.815876 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsmzn" event={"ID":"8af207aa-2798-477a-80e7-d8c7377fa8f4","Type":"ContainerStarted","Data":"0adb28aca3ddd823f1b028560b1f997e7f4f91b018c5f55eb1b72c25f7a7512d"} Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.817648 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-9qlzn" event={"ID":"c7b3c652-baa1-4549-9b0b-974f430b56dd","Type":"ContainerStarted","Data":"79ef6f59ed5197645c03eacab52d70ab5078927fa4dd2f90037a5a1b38343ff6"} Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.819279 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29330670-5868z" event={"ID":"0532ff61-84a7-44b0-b8d3-d6ffad413de5","Type":"ContainerDied","Data":"675972bd8f68b13c027c9946fbbb3cca77bcd99c7421beab2de23179963667ff"} Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.819321 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="675972bd8f68b13c027c9946fbbb3cca77bcd99c7421beab2de23179963667ff" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.819359 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29330670-5868z" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.865976 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:21 crc kubenswrapper[5024]: E1007 12:30:21.866631 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:22.36660281 +0000 UTC m=+160.442389648 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.887023 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w84bt" Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.969593 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:21 crc kubenswrapper[5024]: E1007 12:30:21.969967 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:22.469941884 +0000 UTC m=+160.545728742 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:21 crc kubenswrapper[5024]: I1007 12:30:21.972894 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tw9jc" Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.056178 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.070283 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f7138cb-78ee-4467-be40-7d97cf6ed0e9-kube-api-access\") pod \"3f7138cb-78ee-4467-be40-7d97cf6ed0e9\" (UID: \"3f7138cb-78ee-4467-be40-7d97cf6ed0e9\") " Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.070585 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f7138cb-78ee-4467-be40-7d97cf6ed0e9-kubelet-dir\") pod \"3f7138cb-78ee-4467-be40-7d97cf6ed0e9\" (UID: \"3f7138cb-78ee-4467-be40-7d97cf6ed0e9\") " Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.070774 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 
12:30:22 crc kubenswrapper[5024]: E1007 12:30:22.071106 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:22.571089187 +0000 UTC m=+160.646876035 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.072776 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f7138cb-78ee-4467-be40-7d97cf6ed0e9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3f7138cb-78ee-4467-be40-7d97cf6ed0e9" (UID: "3f7138cb-78ee-4467-be40-7d97cf6ed0e9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.075709 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f7138cb-78ee-4467-be40-7d97cf6ed0e9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3f7138cb-78ee-4467-be40-7d97cf6ed0e9" (UID: "3f7138cb-78ee-4467-be40-7d97cf6ed0e9"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.089289 5024 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.171447 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:22 crc kubenswrapper[5024]: E1007 12:30:22.171650 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:22.671620483 +0000 UTC m=+160.747407331 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.171931 5024 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f7138cb-78ee-4467-be40-7d97cf6ed0e9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.171948 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f7138cb-78ee-4467-be40-7d97cf6ed0e9-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.187441 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w84bt"] Oct 07 12:30:22 crc kubenswrapper[5024]: W1007 12:30:22.222526 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8805154_72fd_434b_88f8_9cb3ca239aa9.slice/crio-0423bbacdace5cfff49ed1305a96553f49f859e68ba8c330082cdad5d46c6fbb WatchSource:0}: Error finding container 0423bbacdace5cfff49ed1305a96553f49f859e68ba8c330082cdad5d46c6fbb: Status 404 returned error can't find the container with id 0423bbacdace5cfff49ed1305a96553f49f859e68ba8c330082cdad5d46c6fbb Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.272421 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:22 crc kubenswrapper[5024]: E1007 12:30:22.272841 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:22.772829078 +0000 UTC m=+160.848615916 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.279537 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tw9jc"] Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.373187 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:22 crc kubenswrapper[5024]: E1007 12:30:22.373470 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:22.873448196 +0000 UTC m=+160.949235034 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.373523 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:22 crc kubenswrapper[5024]: E1007 12:30:22.374061 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:22.874046523 +0000 UTC m=+160.949833361 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.405732 5024 patch_prober.go:28] interesting pod/router-default-5444994796-l4pwr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 07 12:30:22 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld Oct 07 12:30:22 crc kubenswrapper[5024]: [+]process-running ok Oct 07 12:30:22 crc kubenswrapper[5024]: healthz check failed Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.405806 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4pwr" podUID="b3ccf7f5-1756-4f98-8b76-fe7f9ae77075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.474498 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:22 crc kubenswrapper[5024]: E1007 12:30:22.474628 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-07 12:30:22.97460951 +0000 UTC m=+161.050396348 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.474733 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:22 crc kubenswrapper[5024]: E1007 12:30:22.475038 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:22.975026761 +0000 UTC m=+161.050813599 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.575525 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:22 crc kubenswrapper[5024]: E1007 12:30:22.575663 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:23.07564919 +0000 UTC m=+161.151436028 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.575764 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:22 crc kubenswrapper[5024]: E1007 12:30:22.576031 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:23.07602416 +0000 UTC m=+161.151810988 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.676577 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:22 crc kubenswrapper[5024]: E1007 12:30:22.676751 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:23.176729161 +0000 UTC m=+161.252515999 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.676882 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:22 crc kubenswrapper[5024]: E1007 12:30:22.677262 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:23.177254606 +0000 UTC m=+161.253041444 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.777796 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:22 crc kubenswrapper[5024]: E1007 12:30:22.778386 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:23.278337307 +0000 UTC m=+161.354124185 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.779664 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:22 crc kubenswrapper[5024]: E1007 12:30:22.780176 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-07 12:30:23.280121256 +0000 UTC m=+161.355908094 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.838527 5024 generic.go:334] "Generic (PLEG): container finished" podID="4475b87f-7d9c-4a7d-aa85-fcce45d805ae" containerID="d697385c5b833453263ac33253cdd70995d9e0178d7d279fc9dfd02399c63170" exitCode=0 Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.838745 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tw9jc" event={"ID":"4475b87f-7d9c-4a7d-aa85-fcce45d805ae","Type":"ContainerDied","Data":"d697385c5b833453263ac33253cdd70995d9e0178d7d279fc9dfd02399c63170"} Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.839017 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tw9jc" event={"ID":"4475b87f-7d9c-4a7d-aa85-fcce45d805ae","Type":"ContainerStarted","Data":"e61c997c2c384b9b0972ca198cbe5d2daccb963865dcc0584d89e6e6107b00b3"} Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.843834 5024 generic.go:334] "Generic (PLEG): container finished" podID="9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0" containerID="c7a0a3dd6db115c2cf6bb9af623fec5d4e34eefce05a54876b811668e9f39116" exitCode=0 Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.843884 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2blfn" event={"ID":"9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0","Type":"ContainerDied","Data":"c7a0a3dd6db115c2cf6bb9af623fec5d4e34eefce05a54876b811668e9f39116"} Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.846898 5024 
generic.go:334] "Generic (PLEG): container finished" podID="8af207aa-2798-477a-80e7-d8c7377fa8f4" containerID="98ae4a363fa645e5434e375656b851b04fbfb286b16636b6dbcb52f1ee0ca516" exitCode=0 Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.846958 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsmzn" event={"ID":"8af207aa-2798-477a-80e7-d8c7377fa8f4","Type":"ContainerDied","Data":"98ae4a363fa645e5434e375656b851b04fbfb286b16636b6dbcb52f1ee0ca516"} Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.852951 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9qlzn" event={"ID":"c7b3c652-baa1-4549-9b0b-974f430b56dd","Type":"ContainerStarted","Data":"3a409476d82a54c56b761cd96c1af34e96456fb9e462e80a1873a3225afacb05"} Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.855357 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w84bt" event={"ID":"e8805154-72fd-434b-88f8-9cb3ca239aa9","Type":"ContainerStarted","Data":"0423bbacdace5cfff49ed1305a96553f49f859e68ba8c330082cdad5d46c6fbb"} Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.868793 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3f7138cb-78ee-4467-be40-7d97cf6ed0e9","Type":"ContainerDied","Data":"8428ff8b92f1bfa412755b6d9797b26a5d989d5197d1b99215c0951aa6eaad39"} Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.868857 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8428ff8b92f1bfa412755b6d9797b26a5d989d5197d1b99215c0951aa6eaad39" Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.868857 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.883100 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:22 crc kubenswrapper[5024]: E1007 12:30:22.883194 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-07 12:30:23.383177952 +0000 UTC m=+161.458964780 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.883970 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:22 crc kubenswrapper[5024]: E1007 12:30:22.884491 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-10-07 12:30:23.384483528 +0000 UTC m=+161.460270366 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xs54z" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.917751 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-9qlzn" podStartSLOduration=21.917735016 podStartE2EDuration="21.917735016s" podCreationTimestamp="2025-10-07 12:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:22.91426182 +0000 UTC m=+160.990048658" watchObservedRunningTime="2025-10-07 12:30:22.917735016 +0000 UTC m=+160.993521854" Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.957336 5024 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-10-07T12:30:22.08931857Z","Handler":null,"Name":""} Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.969905 5024 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.969946 5024 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 
12:30:22.987475 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 07 12:30:22 crc kubenswrapper[5024]: I1007 12:30:22.992395 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 07 12:30:23 crc kubenswrapper[5024]: I1007 12:30:23.088721 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:23 crc kubenswrapper[5024]: I1007 12:30:23.091714 5024 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 07 12:30:23 crc kubenswrapper[5024]: I1007 12:30:23.091752 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:23 crc kubenswrapper[5024]: I1007 12:30:23.143257 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xs54z\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") " pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:23 crc kubenswrapper[5024]: I1007 12:30:23.286744 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:23 crc kubenswrapper[5024]: I1007 12:30:23.291541 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-bdgps" Oct 07 12:30:23 crc kubenswrapper[5024]: I1007 12:30:23.321953 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:23 crc kubenswrapper[5024]: I1007 12:30:23.404316 5024 patch_prober.go:28] interesting pod/router-default-5444994796-l4pwr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 07 12:30:23 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld Oct 07 12:30:23 crc kubenswrapper[5024]: [+]process-running ok Oct 07 12:30:23 crc kubenswrapper[5024]: healthz check failed Oct 07 12:30:23 crc kubenswrapper[5024]: I1007 12:30:23.404803 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4pwr" podUID="b3ccf7f5-1756-4f98-8b76-fe7f9ae77075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 07 12:30:23 crc kubenswrapper[5024]: I1007 12:30:23.461574 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-wh6d2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Oct 07 12:30:23 crc kubenswrapper[5024]: I1007 12:30:23.461622 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-wh6d2" podUID="563d2566-d201-4094-9f4d-20a167bfd0f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Oct 07 12:30:23 crc kubenswrapper[5024]: I1007 12:30:23.461593 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-wh6d2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Oct 07 12:30:23 crc kubenswrapper[5024]: I1007 
12:30:23.462013 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wh6d2" podUID="563d2566-d201-4094-9f4d-20a167bfd0f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Oct 07 12:30:23 crc kubenswrapper[5024]: I1007 12:30:23.486199 5024 patch_prober.go:28] interesting pod/console-f9d7485db-9t72d container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Oct 07 12:30:23 crc kubenswrapper[5024]: I1007 12:30:23.486282 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9t72d" podUID="f631c93e-2066-410d-bfcb-232ee1cced2a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" Oct 07 12:30:23 crc kubenswrapper[5024]: I1007 12:30:23.617755 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xs54z"] Oct 07 12:30:23 crc kubenswrapper[5024]: W1007 12:30:23.631674 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f471f75_4e1e_4093_975b_e02e2b7f8b32.slice/crio-42760d67ce9f12f9032f8328ca7299dc36acbbefde68fef75e1b84c438f1fa22 WatchSource:0}: Error finding container 42760d67ce9f12f9032f8328ca7299dc36acbbefde68fef75e1b84c438f1fa22: Status 404 returned error can't find the container with id 42760d67ce9f12f9032f8328ca7299dc36acbbefde68fef75e1b84c438f1fa22 Oct 07 12:30:23 crc kubenswrapper[5024]: I1007 12:30:23.797944 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jrgc5" Oct 07 12:30:23 crc kubenswrapper[5024]: I1007 12:30:23.876491 
5024 generic.go:334] "Generic (PLEG): container finished" podID="e8805154-72fd-434b-88f8-9cb3ca239aa9" containerID="80941f8f55a2f55c865e36591a98f563b0faa7e671410162d52f717e7d4de463" exitCode=0 Oct 07 12:30:23 crc kubenswrapper[5024]: I1007 12:30:23.876561 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w84bt" event={"ID":"e8805154-72fd-434b-88f8-9cb3ca239aa9","Type":"ContainerDied","Data":"80941f8f55a2f55c865e36591a98f563b0faa7e671410162d52f717e7d4de463"} Oct 07 12:30:23 crc kubenswrapper[5024]: I1007 12:30:23.878289 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" event={"ID":"1f471f75-4e1e-4093-975b-e02e2b7f8b32","Type":"ContainerStarted","Data":"42760d67ce9f12f9032f8328ca7299dc36acbbefde68fef75e1b84c438f1fa22"} Oct 07 12:30:23 crc kubenswrapper[5024]: I1007 12:30:23.937436 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs" Oct 07 12:30:24 crc kubenswrapper[5024]: I1007 12:30:24.008552 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-xtwlc" Oct 07 12:30:24 crc kubenswrapper[5024]: I1007 12:30:24.048170 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr9gr" Oct 07 12:30:24 crc kubenswrapper[5024]: I1007 12:30:24.404960 5024 patch_prober.go:28] interesting pod/router-default-5444994796-l4pwr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 07 12:30:24 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld Oct 07 12:30:24 crc kubenswrapper[5024]: [+]process-running ok Oct 07 12:30:24 crc kubenswrapper[5024]: healthz check failed Oct 07 12:30:24 
crc kubenswrapper[5024]: I1007 12:30:24.405244 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4pwr" podUID="b3ccf7f5-1756-4f98-8b76-fe7f9ae77075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 07 12:30:24 crc kubenswrapper[5024]: I1007 12:30:24.491424 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-fqbxr" Oct 07 12:30:24 crc kubenswrapper[5024]: I1007 12:30:24.517566 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzfnc" Oct 07 12:30:24 crc kubenswrapper[5024]: I1007 12:30:24.760189 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Oct 07 12:30:24 crc kubenswrapper[5024]: I1007 12:30:24.885150 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" event={"ID":"1f471f75-4e1e-4093-975b-e02e2b7f8b32","Type":"ContainerStarted","Data":"8e269dbf9814de474ac41940ec760e1e1d38effa1f86d99792d06eacedec7a9b"} Oct 07 12:30:24 crc kubenswrapper[5024]: I1007 12:30:24.885880 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:25 crc kubenswrapper[5024]: I1007 12:30:25.406569 5024 patch_prober.go:28] interesting pod/router-default-5444994796-l4pwr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 07 12:30:25 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld Oct 07 12:30:25 crc kubenswrapper[5024]: [+]process-running ok Oct 07 12:30:25 crc kubenswrapper[5024]: healthz 
check failed Oct 07 12:30:25 crc kubenswrapper[5024]: I1007 12:30:25.406662 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4pwr" podUID="b3ccf7f5-1756-4f98-8b76-fe7f9ae77075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 07 12:30:26 crc kubenswrapper[5024]: I1007 12:30:26.354611 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-metrics-certs\") pod \"network-metrics-daemon-gtmmn\" (UID: \"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\") " pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:30:26 crc kubenswrapper[5024]: I1007 12:30:26.361355 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac027a0c-8461-4ea2-9a6e-40b4af6721b9-metrics-certs\") pod \"network-metrics-daemon-gtmmn\" (UID: \"ac027a0c-8461-4ea2-9a6e-40b4af6721b9\") " pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:30:26 crc kubenswrapper[5024]: I1007 12:30:26.404361 5024 patch_prober.go:28] interesting pod/router-default-5444994796-l4pwr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 07 12:30:26 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld Oct 07 12:30:26 crc kubenswrapper[5024]: [+]process-running ok Oct 07 12:30:26 crc kubenswrapper[5024]: healthz check failed Oct 07 12:30:26 crc kubenswrapper[5024]: I1007 12:30:26.404478 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4pwr" podUID="b3ccf7f5-1756-4f98-8b76-fe7f9ae77075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 07 12:30:26 crc kubenswrapper[5024]: I1007 12:30:26.467679 5024 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gtmmn" Oct 07 12:30:27 crc kubenswrapper[5024]: I1007 12:30:27.404645 5024 patch_prober.go:28] interesting pod/router-default-5444994796-l4pwr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 07 12:30:27 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld Oct 07 12:30:27 crc kubenswrapper[5024]: [+]process-running ok Oct 07 12:30:27 crc kubenswrapper[5024]: healthz check failed Oct 07 12:30:27 crc kubenswrapper[5024]: I1007 12:30:27.404702 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4pwr" podUID="b3ccf7f5-1756-4f98-8b76-fe7f9ae77075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 07 12:30:28 crc kubenswrapper[5024]: I1007 12:30:28.404477 5024 patch_prober.go:28] interesting pod/router-default-5444994796-l4pwr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 07 12:30:28 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld Oct 07 12:30:28 crc kubenswrapper[5024]: [+]process-running ok Oct 07 12:30:28 crc kubenswrapper[5024]: healthz check failed Oct 07 12:30:28 crc kubenswrapper[5024]: I1007 12:30:28.404562 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4pwr" podUID="b3ccf7f5-1756-4f98-8b76-fe7f9ae77075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 07 12:30:29 crc kubenswrapper[5024]: I1007 12:30:29.407404 5024 patch_prober.go:28] interesting pod/router-default-5444994796-l4pwr container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 07 12:30:29 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld Oct 07 12:30:29 crc kubenswrapper[5024]: [+]process-running ok Oct 07 12:30:29 crc kubenswrapper[5024]: healthz check failed Oct 07 12:30:29 crc kubenswrapper[5024]: I1007 12:30:29.407473 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4pwr" podUID="b3ccf7f5-1756-4f98-8b76-fe7f9ae77075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 07 12:30:30 crc kubenswrapper[5024]: I1007 12:30:30.404910 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-l4pwr" Oct 07 12:30:30 crc kubenswrapper[5024]: I1007 12:30:30.406972 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-l4pwr" Oct 07 12:30:30 crc kubenswrapper[5024]: I1007 12:30:30.424085 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" podStartSLOduration=147.424068655 podStartE2EDuration="2m27.424068655s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:30:24.922336411 +0000 UTC m=+162.998123249" watchObservedRunningTime="2025-10-07 12:30:30.424068655 +0000 UTC m=+168.499855493" Oct 07 12:30:33 crc kubenswrapper[5024]: I1007 12:30:33.461117 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-wh6d2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Oct 07 12:30:33 crc kubenswrapper[5024]: I1007 12:30:33.461123 5024 
patch_prober.go:28] interesting pod/downloads-7954f5f757-wh6d2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Oct 07 12:30:33 crc kubenswrapper[5024]: I1007 12:30:33.461194 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wh6d2" podUID="563d2566-d201-4094-9f4d-20a167bfd0f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Oct 07 12:30:33 crc kubenswrapper[5024]: I1007 12:30:33.461211 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-wh6d2" podUID="563d2566-d201-4094-9f4d-20a167bfd0f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Oct 07 12:30:33 crc kubenswrapper[5024]: I1007 12:30:33.461259 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-wh6d2" Oct 07 12:30:33 crc kubenswrapper[5024]: I1007 12:30:33.461681 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-wh6d2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Oct 07 12:30:33 crc kubenswrapper[5024]: I1007 12:30:33.461713 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wh6d2" podUID="563d2566-d201-4094-9f4d-20a167bfd0f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Oct 07 12:30:33 crc kubenswrapper[5024]: I1007 12:30:33.461693 5024 kuberuntime_manager.go:1027] "Message for Container 
of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"b145afeedf4f3838aff7d105c5701317fd1fafd89ec71f44ce7dd875bb4b839e"} pod="openshift-console/downloads-7954f5f757-wh6d2" containerMessage="Container download-server failed liveness probe, will be restarted" Oct 07 12:30:33 crc kubenswrapper[5024]: I1007 12:30:33.461777 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-wh6d2" podUID="563d2566-d201-4094-9f4d-20a167bfd0f7" containerName="download-server" containerID="cri-o://b145afeedf4f3838aff7d105c5701317fd1fafd89ec71f44ce7dd875bb4b839e" gracePeriod=2 Oct 07 12:30:33 crc kubenswrapper[5024]: I1007 12:30:33.490709 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-9t72d" Oct 07 12:30:33 crc kubenswrapper[5024]: I1007 12:30:33.494225 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-9t72d" Oct 07 12:30:35 crc kubenswrapper[5024]: I1007 12:30:35.961922 5024 generic.go:334] "Generic (PLEG): container finished" podID="563d2566-d201-4094-9f4d-20a167bfd0f7" containerID="b145afeedf4f3838aff7d105c5701317fd1fafd89ec71f44ce7dd875bb4b839e" exitCode=0 Oct 07 12:30:35 crc kubenswrapper[5024]: I1007 12:30:35.962000 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-wh6d2" event={"ID":"563d2566-d201-4094-9f4d-20a167bfd0f7","Type":"ContainerDied","Data":"b145afeedf4f3838aff7d105c5701317fd1fafd89ec71f44ce7dd875bb4b839e"} Oct 07 12:30:43 crc kubenswrapper[5024]: I1007 12:30:43.332345 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" Oct 07 12:30:43 crc kubenswrapper[5024]: I1007 12:30:43.462205 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-wh6d2 container/download-server namespace/openshift-console: Readiness probe 
status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Oct 07 12:30:43 crc kubenswrapper[5024]: I1007 12:30:43.462259 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wh6d2" podUID="563d2566-d201-4094-9f4d-20a167bfd0f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Oct 07 12:30:43 crc kubenswrapper[5024]: I1007 12:30:43.719863 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 12:30:43 crc kubenswrapper[5024]: I1007 12:30:43.719919 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 12:30:43 crc kubenswrapper[5024]: I1007 12:30:43.769876 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6t8l5" Oct 07 12:30:50 crc kubenswrapper[5024]: I1007 12:30:50.909651 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 07 12:30:52 crc kubenswrapper[5024]: E1007 12:30:52.130720 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Oct 07 12:30:52 crc kubenswrapper[5024]: 
E1007 12:30:52.130949 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrgch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-hwthk_openshift-marketplace(0007f6bb-883d-4bb8-b3ee-4c37095c342d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 07 12:30:52 crc kubenswrapper[5024]: E1007 12:30:52.132170 5024 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-hwthk" podUID="0007f6bb-883d-4bb8-b3ee-4c37095c342d" Oct 07 12:30:53 crc kubenswrapper[5024]: I1007 12:30:53.462160 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-wh6d2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Oct 07 12:30:53 crc kubenswrapper[5024]: I1007 12:30:53.462515 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wh6d2" podUID="563d2566-d201-4094-9f4d-20a167bfd0f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Oct 07 12:30:53 crc kubenswrapper[5024]: E1007 12:30:53.636352 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-hwthk" podUID="0007f6bb-883d-4bb8-b3ee-4c37095c342d" Oct 07 12:30:53 crc kubenswrapper[5024]: E1007 12:30:53.737664 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Oct 07 12:30:53 crc kubenswrapper[5024]: E1007 12:30:53.737843 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs 
--catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-49lpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-fkrsk_openshift-marketplace(b060128a-9755-499a-b0f4-d9fc67649e66): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 07 12:30:53 crc kubenswrapper[5024]: E1007 12:30:53.739059 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-fkrsk" 
podUID="b060128a-9755-499a-b0f4-d9fc67649e66" Oct 07 12:30:55 crc kubenswrapper[5024]: E1007 12:30:55.031399 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-fkrsk" podUID="b060128a-9755-499a-b0f4-d9fc67649e66" Oct 07 12:30:55 crc kubenswrapper[5024]: E1007 12:30:55.038510 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:afa798da1eea334bab3cb1e14451ff84f98d35b436cdc4b408b46e289e4e2bc2: Get \"https://registry.redhat.io/v2/redhat/redhat-marketplace-index/blobs/sha256:afa798da1eea334bab3cb1e14451ff84f98d35b436cdc4b408b46e289e4e2bc2\": context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Oct 07 12:30:55 crc kubenswrapper[5024]: E1007 12:30:55.038794 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6nwd5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-dsmzn_openshift-marketplace(8af207aa-2798-477a-80e7-d8c7377fa8f4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:afa798da1eea334bab3cb1e14451ff84f98d35b436cdc4b408b46e289e4e2bc2: Get \"https://registry.redhat.io/v2/redhat/redhat-marketplace-index/blobs/sha256:afa798da1eea334bab3cb1e14451ff84f98d35b436cdc4b408b46e289e4e2bc2\": context canceled" logger="UnhandledError" Oct 07 12:30:55 crc kubenswrapper[5024]: E1007 12:30:55.040151 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying system image from manifest list: reading blob sha256:afa798da1eea334bab3cb1e14451ff84f98d35b436cdc4b408b46e289e4e2bc2: Get \\\"https://registry.redhat.io/v2/redhat/redhat-marketplace-index/blobs/sha256:afa798da1eea334bab3cb1e14451ff84f98d35b436cdc4b408b46e289e4e2bc2\\\": context canceled\"" pod="openshift-marketplace/redhat-marketplace-dsmzn" podUID="8af207aa-2798-477a-80e7-d8c7377fa8f4" Oct 07 12:30:55 crc kubenswrapper[5024]: E1007 12:30:55.063386 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage1720987142/2\": happened during read: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Oct 07 12:30:55 crc kubenswrapper[5024]: E1007 12:30:55.063585 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2xz64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-w84bt_openshift-marketplace(e8805154-72fd-434b-88f8-9cb3ca239aa9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage1720987142/2\": happened during read: context canceled" logger="UnhandledError" Oct 07 12:30:55 crc kubenswrapper[5024]: E1007 12:30:55.066803 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file 
\\\"/var/tmp/container_images_storage1720987142/2\\\": happened during read: context canceled\"" pod="openshift-marketplace/redhat-operators-w84bt" podUID="e8805154-72fd-434b-88f8-9cb3ca239aa9" Oct 07 12:30:55 crc kubenswrapper[5024]: E1007 12:30:55.143855 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 07 12:30:55 crc kubenswrapper[5024]: E1007 12:30:55.144029 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsr27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSourc
e{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-gk8sd_openshift-marketplace(c12e98a2-133d-411c-b54a-e303efbe8889): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 07 12:30:55 crc kubenswrapper[5024]: E1007 12:30:55.145244 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-gk8sd" podUID="c12e98a2-133d-411c-b54a-e303efbe8889" Oct 07 12:30:55 crc kubenswrapper[5024]: E1007 12:30:55.159739 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 07 12:30:55 crc kubenswrapper[5024]: E1007 12:30:55.159936 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2nqqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rsgpt_openshift-marketplace(1080756c-912c-4750-b8b3-df0cc6e623f7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 07 12:30:55 crc kubenswrapper[5024]: E1007 12:30:55.161157 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-rsgpt" podUID="1080756c-912c-4750-b8b3-df0cc6e623f7" Oct 07 12:30:58 crc 
kubenswrapper[5024]: E1007 12:30:58.006921 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rsgpt" podUID="1080756c-912c-4750-b8b3-df0cc6e623f7" Oct 07 12:30:58 crc kubenswrapper[5024]: E1007 12:30:58.006943 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-dsmzn" podUID="8af207aa-2798-477a-80e7-d8c7377fa8f4" Oct 07 12:30:58 crc kubenswrapper[5024]: E1007 12:30:58.007454 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-w84bt" podUID="e8805154-72fd-434b-88f8-9cb3ca239aa9" Oct 07 12:30:58 crc kubenswrapper[5024]: E1007 12:30:58.007595 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-gk8sd" podUID="c12e98a2-133d-411c-b54a-e303efbe8889" Oct 07 12:30:58 crc kubenswrapper[5024]: I1007 12:30:58.377376 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gtmmn"] Oct 07 12:30:58 crc kubenswrapper[5024]: W1007 12:30:58.913791 5024 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac027a0c_8461_4ea2_9a6e_40b4af6721b9.slice/crio-c934b8bd5967d07dc4b07674286688194fa52b4accc6ddd56a55eede33dcd041 WatchSource:0}: Error finding container c934b8bd5967d07dc4b07674286688194fa52b4accc6ddd56a55eede33dcd041: Status 404 returned error can't find the container with id c934b8bd5967d07dc4b07674286688194fa52b4accc6ddd56a55eede33dcd041 Oct 07 12:30:59 crc kubenswrapper[5024]: I1007 12:30:59.072586 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gtmmn" event={"ID":"ac027a0c-8461-4ea2-9a6e-40b4af6721b9","Type":"ContainerStarted","Data":"c934b8bd5967d07dc4b07674286688194fa52b4accc6ddd56a55eede33dcd041"} Oct 07 12:30:59 crc kubenswrapper[5024]: E1007 12:30:59.349290 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Oct 07 12:30:59 crc kubenswrapper[5024]: E1007 12:30:59.349724 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mln5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-2blfn_openshift-marketplace(9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 07 12:30:59 crc kubenswrapper[5024]: E1007 12:30:59.350866 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-2blfn" podUID="9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0" Oct 07 12:31:02 crc 
kubenswrapper[5024]: E1007 12:31:02.068508 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2blfn" podUID="9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0" Oct 07 12:31:03 crc kubenswrapper[5024]: I1007 12:31:03.092899 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-wh6d2" event={"ID":"563d2566-d201-4094-9f4d-20a167bfd0f7","Type":"ContainerStarted","Data":"dc8f49cc9552a94651882c54fc7fdcf9586fb90c73796c46823492707e615731"} Oct 07 12:31:03 crc kubenswrapper[5024]: I1007 12:31:03.094239 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-wh6d2" Oct 07 12:31:03 crc kubenswrapper[5024]: I1007 12:31:03.094315 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-wh6d2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Oct 07 12:31:03 crc kubenswrapper[5024]: I1007 12:31:03.094340 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wh6d2" podUID="563d2566-d201-4094-9f4d-20a167bfd0f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Oct 07 12:31:03 crc kubenswrapper[5024]: I1007 12:31:03.097107 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gtmmn" event={"ID":"ac027a0c-8461-4ea2-9a6e-40b4af6721b9","Type":"ContainerStarted","Data":"db916dd36108e0873e16063e45a46ac205dd02840a55b5146df37928d50f51a1"} Oct 07 12:31:03 crc kubenswrapper[5024]: I1007 12:31:03.461637 5024 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-wh6d2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Oct 07 12:31:03 crc kubenswrapper[5024]: I1007 12:31:03.462019 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-wh6d2" podUID="563d2566-d201-4094-9f4d-20a167bfd0f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Oct 07 12:31:03 crc kubenswrapper[5024]: I1007 12:31:03.461829 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-wh6d2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Oct 07 12:31:03 crc kubenswrapper[5024]: I1007 12:31:03.462082 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wh6d2" podUID="563d2566-d201-4094-9f4d-20a167bfd0f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Oct 07 12:31:04 crc kubenswrapper[5024]: I1007 12:31:04.104225 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tw9jc" event={"ID":"4475b87f-7d9c-4a7d-aa85-fcce45d805ae","Type":"ContainerStarted","Data":"3120728c8609d6dfe46524a49b487061851d36739df252085606a0d2bfcea01c"} Oct 07 12:31:04 crc kubenswrapper[5024]: I1007 12:31:04.104969 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-wh6d2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Oct 07 12:31:04 crc kubenswrapper[5024]: I1007 12:31:04.105064 
5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wh6d2" podUID="563d2566-d201-4094-9f4d-20a167bfd0f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Oct 07 12:31:05 crc kubenswrapper[5024]: I1007 12:31:05.114246 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gtmmn" event={"ID":"ac027a0c-8461-4ea2-9a6e-40b4af6721b9","Type":"ContainerStarted","Data":"2674510d106ed39bcd8a8e8f27be19120510bf431d276570f09131b7eafd1718"} Oct 07 12:31:05 crc kubenswrapper[5024]: I1007 12:31:05.117633 5024 generic.go:334] "Generic (PLEG): container finished" podID="4475b87f-7d9c-4a7d-aa85-fcce45d805ae" containerID="3120728c8609d6dfe46524a49b487061851d36739df252085606a0d2bfcea01c" exitCode=0 Oct 07 12:31:05 crc kubenswrapper[5024]: I1007 12:31:05.117692 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tw9jc" event={"ID":"4475b87f-7d9c-4a7d-aa85-fcce45d805ae","Type":"ContainerDied","Data":"3120728c8609d6dfe46524a49b487061851d36739df252085606a0d2bfcea01c"} Oct 07 12:31:05 crc kubenswrapper[5024]: I1007 12:31:05.118439 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-wh6d2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Oct 07 12:31:05 crc kubenswrapper[5024]: I1007 12:31:05.118481 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wh6d2" podUID="563d2566-d201-4094-9f4d-20a167bfd0f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Oct 07 12:31:05 crc kubenswrapper[5024]: I1007 12:31:05.143764 5024 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-gtmmn" podStartSLOduration=182.143736881 podStartE2EDuration="3m2.143736881s" podCreationTimestamp="2025-10-07 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:31:05.137290013 +0000 UTC m=+203.213076871" watchObservedRunningTime="2025-10-07 12:31:05.143736881 +0000 UTC m=+203.219523719" Oct 07 12:31:11 crc kubenswrapper[5024]: I1007 12:31:11.163223 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tw9jc" event={"ID":"4475b87f-7d9c-4a7d-aa85-fcce45d805ae","Type":"ContainerStarted","Data":"ddc42d836712f9eefd34ba91d6ad060090e2b7fe456407cc2da0bbcaf55f01e2"} Oct 07 12:31:12 crc kubenswrapper[5024]: I1007 12:31:12.182584 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tw9jc" podStartSLOduration=4.507570102 podStartE2EDuration="51.1825676s" podCreationTimestamp="2025-10-07 12:30:21 +0000 UTC" firstStartedPulling="2025-10-07 12:30:22.841527282 +0000 UTC m=+160.917314120" lastFinishedPulling="2025-10-07 12:31:09.51652477 +0000 UTC m=+207.592311618" observedRunningTime="2025-10-07 12:31:12.181210672 +0000 UTC m=+210.256997520" watchObservedRunningTime="2025-10-07 12:31:12.1825676 +0000 UTC m=+210.258354438" Oct 07 12:31:13 crc kubenswrapper[5024]: I1007 12:31:13.461845 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-wh6d2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Oct 07 12:31:13 crc kubenswrapper[5024]: I1007 12:31:13.462268 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-wh6d2" podUID="563d2566-d201-4094-9f4d-20a167bfd0f7" containerName="download-server" 
probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Oct 07 12:31:13 crc kubenswrapper[5024]: I1007 12:31:13.461845 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-wh6d2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Oct 07 12:31:13 crc kubenswrapper[5024]: I1007 12:31:13.462485 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wh6d2" podUID="563d2566-d201-4094-9f4d-20a167bfd0f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Oct 07 12:31:13 crc kubenswrapper[5024]: I1007 12:31:13.720251 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 12:31:13 crc kubenswrapper[5024]: I1007 12:31:13.720343 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 12:31:13 crc kubenswrapper[5024]: I1007 12:31:13.720425 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 12:31:13 crc kubenswrapper[5024]: I1007 12:31:13.721401 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4"} pod="openshift-machine-config-operator/machine-config-daemon-t95cr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 07 12:31:13 crc kubenswrapper[5024]: I1007 12:31:13.721515 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" containerID="cri-o://8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4" gracePeriod=600 Oct 07 12:31:15 crc kubenswrapper[5024]: I1007 12:31:15.183294 5024 generic.go:334] "Generic (PLEG): container finished" podID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerID="8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4" exitCode=0 Oct 07 12:31:15 crc kubenswrapper[5024]: I1007 12:31:15.183452 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerDied","Data":"8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4"} Oct 07 12:31:21 crc kubenswrapper[5024]: I1007 12:31:21.974013 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tw9jc" Oct 07 12:31:21 crc kubenswrapper[5024]: I1007 12:31:21.974604 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tw9jc" Oct 07 12:31:22 crc kubenswrapper[5024]: I1007 12:31:22.692504 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tw9jc" Oct 07 12:31:22 crc kubenswrapper[5024]: I1007 12:31:22.772391 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tw9jc" 
Oct 07 12:31:22 crc kubenswrapper[5024]: I1007 12:31:22.922088 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tw9jc"] Oct 07 12:31:23 crc kubenswrapper[5024]: I1007 12:31:23.461805 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-wh6d2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Oct 07 12:31:23 crc kubenswrapper[5024]: I1007 12:31:23.461860 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-wh6d2" podUID="563d2566-d201-4094-9f4d-20a167bfd0f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Oct 07 12:31:23 crc kubenswrapper[5024]: I1007 12:31:23.461885 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-wh6d2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Oct 07 12:31:23 crc kubenswrapper[5024]: I1007 12:31:23.461916 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-wh6d2" Oct 07 12:31:23 crc kubenswrapper[5024]: I1007 12:31:23.461948 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wh6d2" podUID="563d2566-d201-4094-9f4d-20a167bfd0f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Oct 07 12:31:23 crc kubenswrapper[5024]: I1007 12:31:23.462609 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" 
containerStatusID={"Type":"cri-o","ID":"dc8f49cc9552a94651882c54fc7fdcf9586fb90c73796c46823492707e615731"} pod="openshift-console/downloads-7954f5f757-wh6d2" containerMessage="Container download-server failed liveness probe, will be restarted" Oct 07 12:31:23 crc kubenswrapper[5024]: I1007 12:31:23.462654 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-wh6d2" podUID="563d2566-d201-4094-9f4d-20a167bfd0f7" containerName="download-server" containerID="cri-o://dc8f49cc9552a94651882c54fc7fdcf9586fb90c73796c46823492707e615731" gracePeriod=2 Oct 07 12:31:23 crc kubenswrapper[5024]: I1007 12:31:23.463190 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-wh6d2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Oct 07 12:31:23 crc kubenswrapper[5024]: I1007 12:31:23.463225 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wh6d2" podUID="563d2566-d201-4094-9f4d-20a167bfd0f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Oct 07 12:31:24 crc kubenswrapper[5024]: I1007 12:31:24.230131 5024 generic.go:334] "Generic (PLEG): container finished" podID="563d2566-d201-4094-9f4d-20a167bfd0f7" containerID="dc8f49cc9552a94651882c54fc7fdcf9586fb90c73796c46823492707e615731" exitCode=0 Oct 07 12:31:24 crc kubenswrapper[5024]: I1007 12:31:24.230222 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-wh6d2" event={"ID":"563d2566-d201-4094-9f4d-20a167bfd0f7","Type":"ContainerDied","Data":"dc8f49cc9552a94651882c54fc7fdcf9586fb90c73796c46823492707e615731"} Oct 07 12:31:24 crc kubenswrapper[5024]: I1007 12:31:24.230795 5024 scope.go:117] "RemoveContainer" 
containerID="b145afeedf4f3838aff7d105c5701317fd1fafd89ec71f44ce7dd875bb4b839e" Oct 07 12:31:24 crc kubenswrapper[5024]: I1007 12:31:24.231024 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tw9jc" podUID="4475b87f-7d9c-4a7d-aa85-fcce45d805ae" containerName="registry-server" containerID="cri-o://ddc42d836712f9eefd34ba91d6ad060090e2b7fe456407cc2da0bbcaf55f01e2" gracePeriod=2 Oct 07 12:31:25 crc kubenswrapper[5024]: I1007 12:31:25.242964 5024 generic.go:334] "Generic (PLEG): container finished" podID="4475b87f-7d9c-4a7d-aa85-fcce45d805ae" containerID="ddc42d836712f9eefd34ba91d6ad060090e2b7fe456407cc2da0bbcaf55f01e2" exitCode=0 Oct 07 12:31:25 crc kubenswrapper[5024]: I1007 12:31:25.243045 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tw9jc" event={"ID":"4475b87f-7d9c-4a7d-aa85-fcce45d805ae","Type":"ContainerDied","Data":"ddc42d836712f9eefd34ba91d6ad060090e2b7fe456407cc2da0bbcaf55f01e2"} Oct 07 12:31:25 crc kubenswrapper[5024]: I1007 12:31:25.413604 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tw9jc" Oct 07 12:31:25 crc kubenswrapper[5024]: I1007 12:31:25.500671 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4475b87f-7d9c-4a7d-aa85-fcce45d805ae-utilities\") pod \"4475b87f-7d9c-4a7d-aa85-fcce45d805ae\" (UID: \"4475b87f-7d9c-4a7d-aa85-fcce45d805ae\") " Oct 07 12:31:25 crc kubenswrapper[5024]: I1007 12:31:25.500736 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4475b87f-7d9c-4a7d-aa85-fcce45d805ae-catalog-content\") pod \"4475b87f-7d9c-4a7d-aa85-fcce45d805ae\" (UID: \"4475b87f-7d9c-4a7d-aa85-fcce45d805ae\") " Oct 07 12:31:25 crc kubenswrapper[5024]: I1007 12:31:25.500805 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjfn6\" (UniqueName: \"kubernetes.io/projected/4475b87f-7d9c-4a7d-aa85-fcce45d805ae-kube-api-access-vjfn6\") pod \"4475b87f-7d9c-4a7d-aa85-fcce45d805ae\" (UID: \"4475b87f-7d9c-4a7d-aa85-fcce45d805ae\") " Oct 07 12:31:25 crc kubenswrapper[5024]: I1007 12:31:25.501938 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4475b87f-7d9c-4a7d-aa85-fcce45d805ae-utilities" (OuterVolumeSpecName: "utilities") pod "4475b87f-7d9c-4a7d-aa85-fcce45d805ae" (UID: "4475b87f-7d9c-4a7d-aa85-fcce45d805ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:31:25 crc kubenswrapper[5024]: I1007 12:31:25.506965 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4475b87f-7d9c-4a7d-aa85-fcce45d805ae-kube-api-access-vjfn6" (OuterVolumeSpecName: "kube-api-access-vjfn6") pod "4475b87f-7d9c-4a7d-aa85-fcce45d805ae" (UID: "4475b87f-7d9c-4a7d-aa85-fcce45d805ae"). InnerVolumeSpecName "kube-api-access-vjfn6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:31:25 crc kubenswrapper[5024]: I1007 12:31:25.601821 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjfn6\" (UniqueName: \"kubernetes.io/projected/4475b87f-7d9c-4a7d-aa85-fcce45d805ae-kube-api-access-vjfn6\") on node \"crc\" DevicePath \"\"" Oct 07 12:31:25 crc kubenswrapper[5024]: I1007 12:31:25.601853 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4475b87f-7d9c-4a7d-aa85-fcce45d805ae-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 12:31:25 crc kubenswrapper[5024]: I1007 12:31:25.604629 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4475b87f-7d9c-4a7d-aa85-fcce45d805ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4475b87f-7d9c-4a7d-aa85-fcce45d805ae" (UID: "4475b87f-7d9c-4a7d-aa85-fcce45d805ae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:31:25 crc kubenswrapper[5024]: I1007 12:31:25.703853 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4475b87f-7d9c-4a7d-aa85-fcce45d805ae-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 12:31:26 crc kubenswrapper[5024]: I1007 12:31:26.254500 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-wh6d2" event={"ID":"563d2566-d201-4094-9f4d-20a167bfd0f7","Type":"ContainerStarted","Data":"7d55113f41b8f583a007b8c510f1b2e07d8dc8612f09119f1fb7495e4c8256ba"} Oct 07 12:31:26 crc kubenswrapper[5024]: I1007 12:31:26.254841 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-wh6d2" Oct 07 12:31:26 crc kubenswrapper[5024]: I1007 12:31:26.254983 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-wh6d2 container/download-server 
namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Oct 07 12:31:26 crc kubenswrapper[5024]: I1007 12:31:26.255029 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wh6d2" podUID="563d2566-d201-4094-9f4d-20a167bfd0f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Oct 07 12:31:26 crc kubenswrapper[5024]: I1007 12:31:26.257308 5024 generic.go:334] "Generic (PLEG): container finished" podID="8af207aa-2798-477a-80e7-d8c7377fa8f4" containerID="31b3ad2187c78f6987d87470728ac498904f61b4d1cbf63722db35fca66661c6" exitCode=0 Oct 07 12:31:26 crc kubenswrapper[5024]: I1007 12:31:26.257344 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsmzn" event={"ID":"8af207aa-2798-477a-80e7-d8c7377fa8f4","Type":"ContainerDied","Data":"31b3ad2187c78f6987d87470728ac498904f61b4d1cbf63722db35fca66661c6"} Oct 07 12:31:26 crc kubenswrapper[5024]: I1007 12:31:26.259813 5024 generic.go:334] "Generic (PLEG): container finished" podID="1080756c-912c-4750-b8b3-df0cc6e623f7" containerID="6539f883753b60335e0e35865dc6bde270596b83edf7e5c7b0c00b66747da85e" exitCode=0 Oct 07 12:31:26 crc kubenswrapper[5024]: I1007 12:31:26.259900 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rsgpt" event={"ID":"1080756c-912c-4750-b8b3-df0cc6e623f7","Type":"ContainerDied","Data":"6539f883753b60335e0e35865dc6bde270596b83edf7e5c7b0c00b66747da85e"} Oct 07 12:31:26 crc kubenswrapper[5024]: I1007 12:31:26.267868 5024 generic.go:334] "Generic (PLEG): container finished" podID="b060128a-9755-499a-b0f4-d9fc67649e66" containerID="ec396f2b6a793b976b1ff2b3d6b14987635f61260b014258501059daeb626eff" exitCode=0 Oct 07 12:31:26 crc kubenswrapper[5024]: I1007 
12:31:26.267951 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkrsk" event={"ID":"b060128a-9755-499a-b0f4-d9fc67649e66","Type":"ContainerDied","Data":"ec396f2b6a793b976b1ff2b3d6b14987635f61260b014258501059daeb626eff"} Oct 07 12:31:26 crc kubenswrapper[5024]: I1007 12:31:26.274089 5024 generic.go:334] "Generic (PLEG): container finished" podID="c12e98a2-133d-411c-b54a-e303efbe8889" containerID="2ba4fcd35483f9a674d24481ce1ec3fc1ee9f0c644eda90f05156e6592cebf1a" exitCode=0 Oct 07 12:31:26 crc kubenswrapper[5024]: I1007 12:31:26.274220 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gk8sd" event={"ID":"c12e98a2-133d-411c-b54a-e303efbe8889","Type":"ContainerDied","Data":"2ba4fcd35483f9a674d24481ce1ec3fc1ee9f0c644eda90f05156e6592cebf1a"} Oct 07 12:31:26 crc kubenswrapper[5024]: I1007 12:31:26.279197 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w84bt" event={"ID":"e8805154-72fd-434b-88f8-9cb3ca239aa9","Type":"ContainerStarted","Data":"1bdfb303613cc0084e907409783bc5b135e52dbcd2bf6ce23203409f483cb647"} Oct 07 12:31:26 crc kubenswrapper[5024]: I1007 12:31:26.283309 5024 generic.go:334] "Generic (PLEG): container finished" podID="0007f6bb-883d-4bb8-b3ee-4c37095c342d" containerID="88099c17ba7f3dc18c16c18ae672d9da35fc371199d20d04cf1a5aecb0c63ac4" exitCode=0 Oct 07 12:31:26 crc kubenswrapper[5024]: I1007 12:31:26.283336 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwthk" event={"ID":"0007f6bb-883d-4bb8-b3ee-4c37095c342d","Type":"ContainerDied","Data":"88099c17ba7f3dc18c16c18ae672d9da35fc371199d20d04cf1a5aecb0c63ac4"} Oct 07 12:31:26 crc kubenswrapper[5024]: I1007 12:31:26.290643 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tw9jc" 
event={"ID":"4475b87f-7d9c-4a7d-aa85-fcce45d805ae","Type":"ContainerDied","Data":"e61c997c2c384b9b0972ca198cbe5d2daccb963865dcc0584d89e6e6107b00b3"} Oct 07 12:31:26 crc kubenswrapper[5024]: I1007 12:31:26.290661 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tw9jc" Oct 07 12:31:26 crc kubenswrapper[5024]: I1007 12:31:26.290707 5024 scope.go:117] "RemoveContainer" containerID="ddc42d836712f9eefd34ba91d6ad060090e2b7fe456407cc2da0bbcaf55f01e2" Oct 07 12:31:26 crc kubenswrapper[5024]: I1007 12:31:26.293436 5024 generic.go:334] "Generic (PLEG): container finished" podID="9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0" containerID="47324e0ccc8178ef20027fe0b99e212af1ed805f3dce889ead14261d8ae24f0f" exitCode=0 Oct 07 12:31:26 crc kubenswrapper[5024]: I1007 12:31:26.293495 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2blfn" event={"ID":"9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0","Type":"ContainerDied","Data":"47324e0ccc8178ef20027fe0b99e212af1ed805f3dce889ead14261d8ae24f0f"} Oct 07 12:31:26 crc kubenswrapper[5024]: I1007 12:31:26.343877 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerStarted","Data":"192929481f263af5a33e1bbf20fdadd1eb38459de268ceb1b2b6d1edef4716e1"} Oct 07 12:31:26 crc kubenswrapper[5024]: I1007 12:31:26.445082 5024 scope.go:117] "RemoveContainer" containerID="3120728c8609d6dfe46524a49b487061851d36739df252085606a0d2bfcea01c" Oct 07 12:31:26 crc kubenswrapper[5024]: I1007 12:31:26.475075 5024 scope.go:117] "RemoveContainer" containerID="d697385c5b833453263ac33253cdd70995d9e0178d7d279fc9dfd02399c63170" Oct 07 12:31:26 crc kubenswrapper[5024]: I1007 12:31:26.481028 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tw9jc"] Oct 07 12:31:26 crc 
kubenswrapper[5024]: I1007 12:31:26.486653 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tw9jc"] Oct 07 12:31:26 crc kubenswrapper[5024]: I1007 12:31:26.761330 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4475b87f-7d9c-4a7d-aa85-fcce45d805ae" path="/var/lib/kubelet/pods/4475b87f-7d9c-4a7d-aa85-fcce45d805ae/volumes" Oct 07 12:31:27 crc kubenswrapper[5024]: I1007 12:31:27.352829 5024 generic.go:334] "Generic (PLEG): container finished" podID="e8805154-72fd-434b-88f8-9cb3ca239aa9" containerID="1bdfb303613cc0084e907409783bc5b135e52dbcd2bf6ce23203409f483cb647" exitCode=0 Oct 07 12:31:27 crc kubenswrapper[5024]: I1007 12:31:27.352934 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w84bt" event={"ID":"e8805154-72fd-434b-88f8-9cb3ca239aa9","Type":"ContainerDied","Data":"1bdfb303613cc0084e907409783bc5b135e52dbcd2bf6ce23203409f483cb647"} Oct 07 12:31:27 crc kubenswrapper[5024]: I1007 12:31:27.353962 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-wh6d2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Oct 07 12:31:27 crc kubenswrapper[5024]: I1007 12:31:27.354043 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wh6d2" podUID="563d2566-d201-4094-9f4d-20a167bfd0f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Oct 07 12:31:29 crc kubenswrapper[5024]: I1007 12:31:29.366409 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkrsk" event={"ID":"b060128a-9755-499a-b0f4-d9fc67649e66","Type":"ContainerStarted","Data":"3473800f91a428c451a3cfef980d11187de3d68f819c04c22ec36cd0eace8902"} Oct 07 
12:31:30 crc kubenswrapper[5024]: I1007 12:31:30.400553 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fkrsk" podStartSLOduration=5.427945666 podStartE2EDuration="1m12.400535682s" podCreationTimestamp="2025-10-07 12:30:18 +0000 UTC" firstStartedPulling="2025-10-07 12:30:20.801219002 +0000 UTC m=+158.877005840" lastFinishedPulling="2025-10-07 12:31:27.773809018 +0000 UTC m=+225.849595856" observedRunningTime="2025-10-07 12:31:30.397638332 +0000 UTC m=+228.473425190" watchObservedRunningTime="2025-10-07 12:31:30.400535682 +0000 UTC m=+228.476322510" Oct 07 12:31:32 crc kubenswrapper[5024]: I1007 12:31:32.386513 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsmzn" event={"ID":"8af207aa-2798-477a-80e7-d8c7377fa8f4","Type":"ContainerStarted","Data":"d800e69954694d64f92ce71abd7cc4d22615080a40f9d477fd7cfc3db9cd2740"} Oct 07 12:31:32 crc kubenswrapper[5024]: I1007 12:31:32.404473 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dsmzn" podStartSLOduration=4.175042716 podStartE2EDuration="1m12.404457728s" podCreationTimestamp="2025-10-07 12:30:20 +0000 UTC" firstStartedPulling="2025-10-07 12:30:22.848187066 +0000 UTC m=+160.923973904" lastFinishedPulling="2025-10-07 12:31:31.077602078 +0000 UTC m=+229.153388916" observedRunningTime="2025-10-07 12:31:32.40234604 +0000 UTC m=+230.478132878" watchObservedRunningTime="2025-10-07 12:31:32.404457728 +0000 UTC m=+230.480244566" Oct 07 12:31:33 crc kubenswrapper[5024]: I1007 12:31:33.396211 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2blfn" event={"ID":"9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0","Type":"ContainerStarted","Data":"b95bf708e7b6a7e4de356003ea62fcd37f5a7f7688f17faaa2d1cf59b3f27902"} Oct 07 12:31:33 crc kubenswrapper[5024]: I1007 12:31:33.411870 5024 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2blfn" podStartSLOduration=3.575073157 podStartE2EDuration="1m13.411856056s" podCreationTimestamp="2025-10-07 12:30:20 +0000 UTC" firstStartedPulling="2025-10-07 12:30:22.845329367 +0000 UTC m=+160.921116205" lastFinishedPulling="2025-10-07 12:31:32.682112266 +0000 UTC m=+230.757899104" observedRunningTime="2025-10-07 12:31:33.410967741 +0000 UTC m=+231.486754579" watchObservedRunningTime="2025-10-07 12:31:33.411856056 +0000 UTC m=+231.487642894" Oct 07 12:31:33 crc kubenswrapper[5024]: I1007 12:31:33.485931 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-wh6d2" Oct 07 12:31:38 crc kubenswrapper[5024]: I1007 12:31:38.826941 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fkrsk" Oct 07 12:31:38 crc kubenswrapper[5024]: I1007 12:31:38.827735 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fkrsk" Oct 07 12:31:38 crc kubenswrapper[5024]: I1007 12:31:38.890047 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fkrsk" Oct 07 12:31:39 crc kubenswrapper[5024]: I1007 12:31:39.465283 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fkrsk" Oct 07 12:31:39 crc kubenswrapper[5024]: I1007 12:31:39.523422 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fkrsk"] Oct 07 12:31:40 crc kubenswrapper[5024]: I1007 12:31:40.605091 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dsmzn" Oct 07 12:31:40 crc kubenswrapper[5024]: I1007 12:31:40.605702 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dsmzn" Oct 07 12:31:40 crc kubenswrapper[5024]: I1007 12:31:40.650680 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dsmzn" Oct 07 12:31:41 crc kubenswrapper[5024]: I1007 12:31:41.002607 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2blfn" Oct 07 12:31:41 crc kubenswrapper[5024]: I1007 12:31:41.002666 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2blfn" Oct 07 12:31:41 crc kubenswrapper[5024]: I1007 12:31:41.036565 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2blfn" Oct 07 12:31:41 crc kubenswrapper[5024]: I1007 12:31:41.436307 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rsgpt" event={"ID":"1080756c-912c-4750-b8b3-df0cc6e623f7","Type":"ContainerStarted","Data":"1a0f19ad2cb05cea984faa5e55e76074aed5553c756d06f4dff0a6a6b2dc9b84"} Oct 07 12:31:41 crc kubenswrapper[5024]: I1007 12:31:41.438180 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gk8sd" event={"ID":"c12e98a2-133d-411c-b54a-e303efbe8889","Type":"ContainerStarted","Data":"b2f41808a7e327f6fc280ecd14e9998de16031d4711830b7598849d0404a5179"} Oct 07 12:31:41 crc kubenswrapper[5024]: I1007 12:31:41.439946 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w84bt" event={"ID":"e8805154-72fd-434b-88f8-9cb3ca239aa9","Type":"ContainerStarted","Data":"e5f84d0dca6bd920d923b17102d4423fdf5c9bedcc5d5844c2e7090ab4ce3df3"} Oct 07 12:31:41 crc kubenswrapper[5024]: I1007 12:31:41.442431 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwthk" 
event={"ID":"0007f6bb-883d-4bb8-b3ee-4c37095c342d","Type":"ContainerStarted","Data":"42b352f1919aed03cf74adab082fa232ef765ac6ecce01232c970fa1ba7385fd"} Oct 07 12:31:41 crc kubenswrapper[5024]: I1007 12:31:41.442886 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fkrsk" podUID="b060128a-9755-499a-b0f4-d9fc67649e66" containerName="registry-server" containerID="cri-o://3473800f91a428c451a3cfef980d11187de3d68f819c04c22ec36cd0eace8902" gracePeriod=2 Oct 07 12:31:41 crc kubenswrapper[5024]: I1007 12:31:41.455932 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rsgpt" podStartSLOduration=5.035275362 podStartE2EDuration="1m23.455916444s" podCreationTimestamp="2025-10-07 12:30:18 +0000 UTC" firstStartedPulling="2025-10-07 12:30:20.81059033 +0000 UTC m=+158.886377168" lastFinishedPulling="2025-10-07 12:31:39.231231392 +0000 UTC m=+237.307018250" observedRunningTime="2025-10-07 12:31:41.452602612 +0000 UTC m=+239.528389480" watchObservedRunningTime="2025-10-07 12:31:41.455916444 +0000 UTC m=+239.531703282" Oct 07 12:31:41 crc kubenswrapper[5024]: I1007 12:31:41.472447 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gk8sd" podStartSLOduration=5.847638835 podStartE2EDuration="1m23.47243121s" podCreationTimestamp="2025-10-07 12:30:18 +0000 UTC" firstStartedPulling="2025-10-07 12:30:20.79572284 +0000 UTC m=+158.871509678" lastFinishedPulling="2025-10-07 12:31:38.420515215 +0000 UTC m=+236.496302053" observedRunningTime="2025-10-07 12:31:41.469708775 +0000 UTC m=+239.545495613" watchObservedRunningTime="2025-10-07 12:31:41.47243121 +0000 UTC m=+239.548218048" Oct 07 12:31:41 crc kubenswrapper[5024]: I1007 12:31:41.493304 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2blfn" Oct 07 12:31:41 crc 
kubenswrapper[5024]: I1007 12:31:41.501710 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dsmzn" Oct 07 12:31:41 crc kubenswrapper[5024]: I1007 12:31:41.514568 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w84bt" podStartSLOduration=4.767198971 podStartE2EDuration="1m20.514550163s" podCreationTimestamp="2025-10-07 12:30:21 +0000 UTC" firstStartedPulling="2025-10-07 12:30:23.879540186 +0000 UTC m=+161.955327024" lastFinishedPulling="2025-10-07 12:31:39.626891368 +0000 UTC m=+237.702678216" observedRunningTime="2025-10-07 12:31:41.512660361 +0000 UTC m=+239.588447219" watchObservedRunningTime="2025-10-07 12:31:41.514550163 +0000 UTC m=+239.590337011" Oct 07 12:31:41 crc kubenswrapper[5024]: I1007 12:31:41.535967 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hwthk" podStartSLOduration=3.344427361 podStartE2EDuration="1m23.535947574s" podCreationTimestamp="2025-10-07 12:30:18 +0000 UTC" firstStartedPulling="2025-10-07 12:30:19.774437928 +0000 UTC m=+157.850224766" lastFinishedPulling="2025-10-07 12:31:39.965958101 +0000 UTC m=+238.041744979" observedRunningTime="2025-10-07 12:31:41.532451937 +0000 UTC m=+239.608238775" watchObservedRunningTime="2025-10-07 12:31:41.535947574 +0000 UTC m=+239.611734422" Oct 07 12:31:41 crc kubenswrapper[5024]: I1007 12:31:41.829547 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fkrsk" Oct 07 12:31:41 crc kubenswrapper[5024]: I1007 12:31:41.887490 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w84bt" Oct 07 12:31:41 crc kubenswrapper[5024]: I1007 12:31:41.887549 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w84bt" Oct 07 12:31:41 crc kubenswrapper[5024]: I1007 12:31:41.916976 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b060128a-9755-499a-b0f4-d9fc67649e66-catalog-content\") pod \"b060128a-9755-499a-b0f4-d9fc67649e66\" (UID: \"b060128a-9755-499a-b0f4-d9fc67649e66\") " Oct 07 12:31:41 crc kubenswrapper[5024]: I1007 12:31:41.917099 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49lpf\" (UniqueName: \"kubernetes.io/projected/b060128a-9755-499a-b0f4-d9fc67649e66-kube-api-access-49lpf\") pod \"b060128a-9755-499a-b0f4-d9fc67649e66\" (UID: \"b060128a-9755-499a-b0f4-d9fc67649e66\") " Oct 07 12:31:41 crc kubenswrapper[5024]: I1007 12:31:41.917172 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b060128a-9755-499a-b0f4-d9fc67649e66-utilities\") pod \"b060128a-9755-499a-b0f4-d9fc67649e66\" (UID: \"b060128a-9755-499a-b0f4-d9fc67649e66\") " Oct 07 12:31:41 crc kubenswrapper[5024]: I1007 12:31:41.917931 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b060128a-9755-499a-b0f4-d9fc67649e66-utilities" (OuterVolumeSpecName: "utilities") pod "b060128a-9755-499a-b0f4-d9fc67649e66" (UID: "b060128a-9755-499a-b0f4-d9fc67649e66"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:31:41 crc kubenswrapper[5024]: I1007 12:31:41.922854 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b060128a-9755-499a-b0f4-d9fc67649e66-kube-api-access-49lpf" (OuterVolumeSpecName: "kube-api-access-49lpf") pod "b060128a-9755-499a-b0f4-d9fc67649e66" (UID: "b060128a-9755-499a-b0f4-d9fc67649e66"). InnerVolumeSpecName "kube-api-access-49lpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:31:41 crc kubenswrapper[5024]: I1007 12:31:41.984434 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b060128a-9755-499a-b0f4-d9fc67649e66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b060128a-9755-499a-b0f4-d9fc67649e66" (UID: "b060128a-9755-499a-b0f4-d9fc67649e66"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:31:42 crc kubenswrapper[5024]: I1007 12:31:42.017883 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49lpf\" (UniqueName: \"kubernetes.io/projected/b060128a-9755-499a-b0f4-d9fc67649e66-kube-api-access-49lpf\") on node \"crc\" DevicePath \"\"" Oct 07 12:31:42 crc kubenswrapper[5024]: I1007 12:31:42.017927 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b060128a-9755-499a-b0f4-d9fc67649e66-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 12:31:42 crc kubenswrapper[5024]: I1007 12:31:42.017940 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b060128a-9755-499a-b0f4-d9fc67649e66-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 12:31:42 crc kubenswrapper[5024]: I1007 12:31:42.449825 5024 generic.go:334] "Generic (PLEG): container finished" podID="b060128a-9755-499a-b0f4-d9fc67649e66" 
containerID="3473800f91a428c451a3cfef980d11187de3d68f819c04c22ec36cd0eace8902" exitCode=0 Oct 07 12:31:42 crc kubenswrapper[5024]: I1007 12:31:42.450177 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fkrsk" Oct 07 12:31:42 crc kubenswrapper[5024]: I1007 12:31:42.450033 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkrsk" event={"ID":"b060128a-9755-499a-b0f4-d9fc67649e66","Type":"ContainerDied","Data":"3473800f91a428c451a3cfef980d11187de3d68f819c04c22ec36cd0eace8902"} Oct 07 12:31:42 crc kubenswrapper[5024]: I1007 12:31:42.450274 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkrsk" event={"ID":"b060128a-9755-499a-b0f4-d9fc67649e66","Type":"ContainerDied","Data":"4370f3ccc9cbf08cfa6be96a0b47f3c3aec7c60f8af0b7c0f2465b5c3c37287c"} Oct 07 12:31:42 crc kubenswrapper[5024]: I1007 12:31:42.450298 5024 scope.go:117] "RemoveContainer" containerID="3473800f91a428c451a3cfef980d11187de3d68f819c04c22ec36cd0eace8902" Oct 07 12:31:42 crc kubenswrapper[5024]: I1007 12:31:42.469291 5024 scope.go:117] "RemoveContainer" containerID="ec396f2b6a793b976b1ff2b3d6b14987635f61260b014258501059daeb626eff" Oct 07 12:31:42 crc kubenswrapper[5024]: I1007 12:31:42.485324 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fkrsk"] Oct 07 12:31:42 crc kubenswrapper[5024]: I1007 12:31:42.492775 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fkrsk"] Oct 07 12:31:42 crc kubenswrapper[5024]: I1007 12:31:42.497804 5024 scope.go:117] "RemoveContainer" containerID="ba0db7ab4cdfd77ea698390e1b2b706a558f00e2978b1af22ec7ce880ff38864" Oct 07 12:31:42 crc kubenswrapper[5024]: I1007 12:31:42.513294 5024 scope.go:117] "RemoveContainer" containerID="3473800f91a428c451a3cfef980d11187de3d68f819c04c22ec36cd0eace8902" Oct 07 
12:31:42 crc kubenswrapper[5024]: E1007 12:31:42.513803 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3473800f91a428c451a3cfef980d11187de3d68f819c04c22ec36cd0eace8902\": container with ID starting with 3473800f91a428c451a3cfef980d11187de3d68f819c04c22ec36cd0eace8902 not found: ID does not exist" containerID="3473800f91a428c451a3cfef980d11187de3d68f819c04c22ec36cd0eace8902" Oct 07 12:31:42 crc kubenswrapper[5024]: I1007 12:31:42.513854 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3473800f91a428c451a3cfef980d11187de3d68f819c04c22ec36cd0eace8902"} err="failed to get container status \"3473800f91a428c451a3cfef980d11187de3d68f819c04c22ec36cd0eace8902\": rpc error: code = NotFound desc = could not find container \"3473800f91a428c451a3cfef980d11187de3d68f819c04c22ec36cd0eace8902\": container with ID starting with 3473800f91a428c451a3cfef980d11187de3d68f819c04c22ec36cd0eace8902 not found: ID does not exist" Oct 07 12:31:42 crc kubenswrapper[5024]: I1007 12:31:42.513887 5024 scope.go:117] "RemoveContainer" containerID="ec396f2b6a793b976b1ff2b3d6b14987635f61260b014258501059daeb626eff" Oct 07 12:31:42 crc kubenswrapper[5024]: E1007 12:31:42.514186 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec396f2b6a793b976b1ff2b3d6b14987635f61260b014258501059daeb626eff\": container with ID starting with ec396f2b6a793b976b1ff2b3d6b14987635f61260b014258501059daeb626eff not found: ID does not exist" containerID="ec396f2b6a793b976b1ff2b3d6b14987635f61260b014258501059daeb626eff" Oct 07 12:31:42 crc kubenswrapper[5024]: I1007 12:31:42.514215 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec396f2b6a793b976b1ff2b3d6b14987635f61260b014258501059daeb626eff"} err="failed to get container status 
\"ec396f2b6a793b976b1ff2b3d6b14987635f61260b014258501059daeb626eff\": rpc error: code = NotFound desc = could not find container \"ec396f2b6a793b976b1ff2b3d6b14987635f61260b014258501059daeb626eff\": container with ID starting with ec396f2b6a793b976b1ff2b3d6b14987635f61260b014258501059daeb626eff not found: ID does not exist" Oct 07 12:31:42 crc kubenswrapper[5024]: I1007 12:31:42.514236 5024 scope.go:117] "RemoveContainer" containerID="ba0db7ab4cdfd77ea698390e1b2b706a558f00e2978b1af22ec7ce880ff38864" Oct 07 12:31:42 crc kubenswrapper[5024]: E1007 12:31:42.514450 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba0db7ab4cdfd77ea698390e1b2b706a558f00e2978b1af22ec7ce880ff38864\": container with ID starting with ba0db7ab4cdfd77ea698390e1b2b706a558f00e2978b1af22ec7ce880ff38864 not found: ID does not exist" containerID="ba0db7ab4cdfd77ea698390e1b2b706a558f00e2978b1af22ec7ce880ff38864" Oct 07 12:31:42 crc kubenswrapper[5024]: I1007 12:31:42.514467 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba0db7ab4cdfd77ea698390e1b2b706a558f00e2978b1af22ec7ce880ff38864"} err="failed to get container status \"ba0db7ab4cdfd77ea698390e1b2b706a558f00e2978b1af22ec7ce880ff38864\": rpc error: code = NotFound desc = could not find container \"ba0db7ab4cdfd77ea698390e1b2b706a558f00e2978b1af22ec7ce880ff38864\": container with ID starting with ba0db7ab4cdfd77ea698390e1b2b706a558f00e2978b1af22ec7ce880ff38864 not found: ID does not exist" Oct 07 12:31:42 crc kubenswrapper[5024]: I1007 12:31:42.760345 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b060128a-9755-499a-b0f4-d9fc67649e66" path="/var/lib/kubelet/pods/b060128a-9755-499a-b0f4-d9fc67649e66/volumes" Oct 07 12:31:42 crc kubenswrapper[5024]: I1007 12:31:42.925345 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w84bt" 
podUID="e8805154-72fd-434b-88f8-9cb3ca239aa9" containerName="registry-server" probeResult="failure" output=< Oct 07 12:31:42 crc kubenswrapper[5024]: timeout: failed to connect service ":50051" within 1s Oct 07 12:31:42 crc kubenswrapper[5024]: > Oct 07 12:31:43 crc kubenswrapper[5024]: I1007 12:31:43.517369 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2blfn"] Oct 07 12:31:43 crc kubenswrapper[5024]: I1007 12:31:43.517589 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2blfn" podUID="9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0" containerName="registry-server" containerID="cri-o://b95bf708e7b6a7e4de356003ea62fcd37f5a7f7688f17faaa2d1cf59b3f27902" gracePeriod=2 Oct 07 12:31:43 crc kubenswrapper[5024]: I1007 12:31:43.970497 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2blfn" Oct 07 12:31:44 crc kubenswrapper[5024]: I1007 12:31:44.039990 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mln5w\" (UniqueName: \"kubernetes.io/projected/9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0-kube-api-access-mln5w\") pod \"9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0\" (UID: \"9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0\") " Oct 07 12:31:44 crc kubenswrapper[5024]: I1007 12:31:44.040050 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0-utilities\") pod \"9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0\" (UID: \"9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0\") " Oct 07 12:31:44 crc kubenswrapper[5024]: I1007 12:31:44.040072 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0-catalog-content\") pod 
\"9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0\" (UID: \"9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0\") " Oct 07 12:31:44 crc kubenswrapper[5024]: I1007 12:31:44.041060 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0-utilities" (OuterVolumeSpecName: "utilities") pod "9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0" (UID: "9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:31:44 crc kubenswrapper[5024]: I1007 12:31:44.045528 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0-kube-api-access-mln5w" (OuterVolumeSpecName: "kube-api-access-mln5w") pod "9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0" (UID: "9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0"). InnerVolumeSpecName "kube-api-access-mln5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:31:44 crc kubenswrapper[5024]: I1007 12:31:44.054808 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0" (UID: "9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:31:44 crc kubenswrapper[5024]: I1007 12:31:44.141775 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mln5w\" (UniqueName: \"kubernetes.io/projected/9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0-kube-api-access-mln5w\") on node \"crc\" DevicePath \"\"" Oct 07 12:31:44 crc kubenswrapper[5024]: I1007 12:31:44.141817 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 12:31:44 crc kubenswrapper[5024]: I1007 12:31:44.141828 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 12:31:44 crc kubenswrapper[5024]: I1007 12:31:44.460509 5024 generic.go:334] "Generic (PLEG): container finished" podID="9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0" containerID="b95bf708e7b6a7e4de356003ea62fcd37f5a7f7688f17faaa2d1cf59b3f27902" exitCode=0 Oct 07 12:31:44 crc kubenswrapper[5024]: I1007 12:31:44.460555 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2blfn" event={"ID":"9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0","Type":"ContainerDied","Data":"b95bf708e7b6a7e4de356003ea62fcd37f5a7f7688f17faaa2d1cf59b3f27902"} Oct 07 12:31:44 crc kubenswrapper[5024]: I1007 12:31:44.460575 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2blfn" Oct 07 12:31:44 crc kubenswrapper[5024]: I1007 12:31:44.460598 5024 scope.go:117] "RemoveContainer" containerID="b95bf708e7b6a7e4de356003ea62fcd37f5a7f7688f17faaa2d1cf59b3f27902" Oct 07 12:31:44 crc kubenswrapper[5024]: I1007 12:31:44.460587 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2blfn" event={"ID":"9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0","Type":"ContainerDied","Data":"58cfc1b95b7e06de6911121ea31959fed11cf68a9f00f650619bd6c631c25869"} Oct 07 12:31:44 crc kubenswrapper[5024]: I1007 12:31:44.484679 5024 scope.go:117] "RemoveContainer" containerID="47324e0ccc8178ef20027fe0b99e212af1ed805f3dce889ead14261d8ae24f0f" Oct 07 12:31:44 crc kubenswrapper[5024]: I1007 12:31:44.496430 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2blfn"] Oct 07 12:31:44 crc kubenswrapper[5024]: I1007 12:31:44.500962 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2blfn"] Oct 07 12:31:44 crc kubenswrapper[5024]: I1007 12:31:44.510551 5024 scope.go:117] "RemoveContainer" containerID="c7a0a3dd6db115c2cf6bb9af623fec5d4e34eefce05a54876b811668e9f39116" Oct 07 12:31:44 crc kubenswrapper[5024]: I1007 12:31:44.534249 5024 scope.go:117] "RemoveContainer" containerID="b95bf708e7b6a7e4de356003ea62fcd37f5a7f7688f17faaa2d1cf59b3f27902" Oct 07 12:31:44 crc kubenswrapper[5024]: E1007 12:31:44.534734 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b95bf708e7b6a7e4de356003ea62fcd37f5a7f7688f17faaa2d1cf59b3f27902\": container with ID starting with b95bf708e7b6a7e4de356003ea62fcd37f5a7f7688f17faaa2d1cf59b3f27902 not found: ID does not exist" containerID="b95bf708e7b6a7e4de356003ea62fcd37f5a7f7688f17faaa2d1cf59b3f27902" Oct 07 12:31:44 crc kubenswrapper[5024]: I1007 12:31:44.534775 5024 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b95bf708e7b6a7e4de356003ea62fcd37f5a7f7688f17faaa2d1cf59b3f27902"} err="failed to get container status \"b95bf708e7b6a7e4de356003ea62fcd37f5a7f7688f17faaa2d1cf59b3f27902\": rpc error: code = NotFound desc = could not find container \"b95bf708e7b6a7e4de356003ea62fcd37f5a7f7688f17faaa2d1cf59b3f27902\": container with ID starting with b95bf708e7b6a7e4de356003ea62fcd37f5a7f7688f17faaa2d1cf59b3f27902 not found: ID does not exist" Oct 07 12:31:44 crc kubenswrapper[5024]: I1007 12:31:44.534803 5024 scope.go:117] "RemoveContainer" containerID="47324e0ccc8178ef20027fe0b99e212af1ed805f3dce889ead14261d8ae24f0f" Oct 07 12:31:44 crc kubenswrapper[5024]: E1007 12:31:44.535063 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47324e0ccc8178ef20027fe0b99e212af1ed805f3dce889ead14261d8ae24f0f\": container with ID starting with 47324e0ccc8178ef20027fe0b99e212af1ed805f3dce889ead14261d8ae24f0f not found: ID does not exist" containerID="47324e0ccc8178ef20027fe0b99e212af1ed805f3dce889ead14261d8ae24f0f" Oct 07 12:31:44 crc kubenswrapper[5024]: I1007 12:31:44.535089 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47324e0ccc8178ef20027fe0b99e212af1ed805f3dce889ead14261d8ae24f0f"} err="failed to get container status \"47324e0ccc8178ef20027fe0b99e212af1ed805f3dce889ead14261d8ae24f0f\": rpc error: code = NotFound desc = could not find container \"47324e0ccc8178ef20027fe0b99e212af1ed805f3dce889ead14261d8ae24f0f\": container with ID starting with 47324e0ccc8178ef20027fe0b99e212af1ed805f3dce889ead14261d8ae24f0f not found: ID does not exist" Oct 07 12:31:44 crc kubenswrapper[5024]: I1007 12:31:44.535108 5024 scope.go:117] "RemoveContainer" containerID="c7a0a3dd6db115c2cf6bb9af623fec5d4e34eefce05a54876b811668e9f39116" Oct 07 12:31:44 crc kubenswrapper[5024]: E1007 
12:31:44.535307 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7a0a3dd6db115c2cf6bb9af623fec5d4e34eefce05a54876b811668e9f39116\": container with ID starting with c7a0a3dd6db115c2cf6bb9af623fec5d4e34eefce05a54876b811668e9f39116 not found: ID does not exist" containerID="c7a0a3dd6db115c2cf6bb9af623fec5d4e34eefce05a54876b811668e9f39116" Oct 07 12:31:44 crc kubenswrapper[5024]: I1007 12:31:44.535332 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7a0a3dd6db115c2cf6bb9af623fec5d4e34eefce05a54876b811668e9f39116"} err="failed to get container status \"c7a0a3dd6db115c2cf6bb9af623fec5d4e34eefce05a54876b811668e9f39116\": rpc error: code = NotFound desc = could not find container \"c7a0a3dd6db115c2cf6bb9af623fec5d4e34eefce05a54876b811668e9f39116\": container with ID starting with c7a0a3dd6db115c2cf6bb9af623fec5d4e34eefce05a54876b811668e9f39116 not found: ID does not exist" Oct 07 12:31:44 crc kubenswrapper[5024]: I1007 12:31:44.758796 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0" path="/var/lib/kubelet/pods/9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0/volumes" Oct 07 12:31:48 crc kubenswrapper[5024]: I1007 12:31:48.387705 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hwthk" Oct 07 12:31:48 crc kubenswrapper[5024]: I1007 12:31:48.388404 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hwthk" Oct 07 12:31:48 crc kubenswrapper[5024]: I1007 12:31:48.429421 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hwthk" Oct 07 12:31:48 crc kubenswrapper[5024]: I1007 12:31:48.516198 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-hwthk" Oct 07 12:31:48 crc kubenswrapper[5024]: I1007 12:31:48.884975 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rsgpt" Oct 07 12:31:48 crc kubenswrapper[5024]: I1007 12:31:48.885025 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rsgpt" Oct 07 12:31:48 crc kubenswrapper[5024]: I1007 12:31:48.925803 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rsgpt" Oct 07 12:31:49 crc kubenswrapper[5024]: I1007 12:31:49.288667 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gk8sd" Oct 07 12:31:49 crc kubenswrapper[5024]: I1007 12:31:49.288751 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gk8sd" Oct 07 12:31:49 crc kubenswrapper[5024]: I1007 12:31:49.326174 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gk8sd" Oct 07 12:31:49 crc kubenswrapper[5024]: I1007 12:31:49.517101 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gk8sd" Oct 07 12:31:49 crc kubenswrapper[5024]: I1007 12:31:49.517176 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rsgpt" Oct 07 12:31:50 crc kubenswrapper[5024]: I1007 12:31:50.920041 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gk8sd"] Oct 07 12:31:51 crc kubenswrapper[5024]: I1007 12:31:51.492823 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gk8sd" podUID="c12e98a2-133d-411c-b54a-e303efbe8889" 
containerName="registry-server" containerID="cri-o://b2f41808a7e327f6fc280ecd14e9998de16031d4711830b7598849d0404a5179" gracePeriod=2 Oct 07 12:31:51 crc kubenswrapper[5024]: I1007 12:31:51.843727 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gk8sd" Oct 07 12:31:51 crc kubenswrapper[5024]: I1007 12:31:51.927309 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w84bt" Oct 07 12:31:51 crc kubenswrapper[5024]: I1007 12:31:51.941179 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c12e98a2-133d-411c-b54a-e303efbe8889-utilities\") pod \"c12e98a2-133d-411c-b54a-e303efbe8889\" (UID: \"c12e98a2-133d-411c-b54a-e303efbe8889\") " Oct 07 12:31:51 crc kubenswrapper[5024]: I1007 12:31:51.941360 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsr27\" (UniqueName: \"kubernetes.io/projected/c12e98a2-133d-411c-b54a-e303efbe8889-kube-api-access-hsr27\") pod \"c12e98a2-133d-411c-b54a-e303efbe8889\" (UID: \"c12e98a2-133d-411c-b54a-e303efbe8889\") " Oct 07 12:31:51 crc kubenswrapper[5024]: I1007 12:31:51.941440 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c12e98a2-133d-411c-b54a-e303efbe8889-catalog-content\") pod \"c12e98a2-133d-411c-b54a-e303efbe8889\" (UID: \"c12e98a2-133d-411c-b54a-e303efbe8889\") " Oct 07 12:31:51 crc kubenswrapper[5024]: I1007 12:31:51.942039 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c12e98a2-133d-411c-b54a-e303efbe8889-utilities" (OuterVolumeSpecName: "utilities") pod "c12e98a2-133d-411c-b54a-e303efbe8889" (UID: "c12e98a2-133d-411c-b54a-e303efbe8889"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:31:51 crc kubenswrapper[5024]: I1007 12:31:51.946587 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c12e98a2-133d-411c-b54a-e303efbe8889-kube-api-access-hsr27" (OuterVolumeSpecName: "kube-api-access-hsr27") pod "c12e98a2-133d-411c-b54a-e303efbe8889" (UID: "c12e98a2-133d-411c-b54a-e303efbe8889"). InnerVolumeSpecName "kube-api-access-hsr27". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:31:51 crc kubenswrapper[5024]: I1007 12:31:51.981366 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w84bt" Oct 07 12:31:51 crc kubenswrapper[5024]: I1007 12:31:51.999122 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c12e98a2-133d-411c-b54a-e303efbe8889-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c12e98a2-133d-411c-b54a-e303efbe8889" (UID: "c12e98a2-133d-411c-b54a-e303efbe8889"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:31:52 crc kubenswrapper[5024]: I1007 12:31:52.047704 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsr27\" (UniqueName: \"kubernetes.io/projected/c12e98a2-133d-411c-b54a-e303efbe8889-kube-api-access-hsr27\") on node \"crc\" DevicePath \"\"" Oct 07 12:31:52 crc kubenswrapper[5024]: I1007 12:31:52.047743 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c12e98a2-133d-411c-b54a-e303efbe8889-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 12:31:52 crc kubenswrapper[5024]: I1007 12:31:52.047754 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c12e98a2-133d-411c-b54a-e303efbe8889-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 12:31:52 crc kubenswrapper[5024]: I1007 12:31:52.500986 5024 generic.go:334] "Generic (PLEG): container finished" podID="c12e98a2-133d-411c-b54a-e303efbe8889" containerID="b2f41808a7e327f6fc280ecd14e9998de16031d4711830b7598849d0404a5179" exitCode=0 Oct 07 12:31:52 crc kubenswrapper[5024]: I1007 12:31:52.501063 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gk8sd" Oct 07 12:31:52 crc kubenswrapper[5024]: I1007 12:31:52.501090 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gk8sd" event={"ID":"c12e98a2-133d-411c-b54a-e303efbe8889","Type":"ContainerDied","Data":"b2f41808a7e327f6fc280ecd14e9998de16031d4711830b7598849d0404a5179"} Oct 07 12:31:52 crc kubenswrapper[5024]: I1007 12:31:52.501161 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gk8sd" event={"ID":"c12e98a2-133d-411c-b54a-e303efbe8889","Type":"ContainerDied","Data":"8cd860d9fffe7a40a298e82c631c9fab2ff1407eb68deead0e74b5672de56080"} Oct 07 12:31:52 crc kubenswrapper[5024]: I1007 12:31:52.501183 5024 scope.go:117] "RemoveContainer" containerID="b2f41808a7e327f6fc280ecd14e9998de16031d4711830b7598849d0404a5179" Oct 07 12:31:52 crc kubenswrapper[5024]: I1007 12:31:52.521657 5024 scope.go:117] "RemoveContainer" containerID="2ba4fcd35483f9a674d24481ce1ec3fc1ee9f0c644eda90f05156e6592cebf1a" Oct 07 12:31:52 crc kubenswrapper[5024]: I1007 12:31:52.532932 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gk8sd"] Oct 07 12:31:52 crc kubenswrapper[5024]: I1007 12:31:52.538961 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gk8sd"] Oct 07 12:31:52 crc kubenswrapper[5024]: I1007 12:31:52.562637 5024 scope.go:117] "RemoveContainer" containerID="b9524e32a0fa4812123679f859cc81b2c646efbb274be86107270877a4b30169" Oct 07 12:31:52 crc kubenswrapper[5024]: I1007 12:31:52.576742 5024 scope.go:117] "RemoveContainer" containerID="b2f41808a7e327f6fc280ecd14e9998de16031d4711830b7598849d0404a5179" Oct 07 12:31:52 crc kubenswrapper[5024]: E1007 12:31:52.577270 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b2f41808a7e327f6fc280ecd14e9998de16031d4711830b7598849d0404a5179\": container with ID starting with b2f41808a7e327f6fc280ecd14e9998de16031d4711830b7598849d0404a5179 not found: ID does not exist" containerID="b2f41808a7e327f6fc280ecd14e9998de16031d4711830b7598849d0404a5179" Oct 07 12:31:52 crc kubenswrapper[5024]: I1007 12:31:52.577322 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2f41808a7e327f6fc280ecd14e9998de16031d4711830b7598849d0404a5179"} err="failed to get container status \"b2f41808a7e327f6fc280ecd14e9998de16031d4711830b7598849d0404a5179\": rpc error: code = NotFound desc = could not find container \"b2f41808a7e327f6fc280ecd14e9998de16031d4711830b7598849d0404a5179\": container with ID starting with b2f41808a7e327f6fc280ecd14e9998de16031d4711830b7598849d0404a5179 not found: ID does not exist" Oct 07 12:31:52 crc kubenswrapper[5024]: I1007 12:31:52.577352 5024 scope.go:117] "RemoveContainer" containerID="2ba4fcd35483f9a674d24481ce1ec3fc1ee9f0c644eda90f05156e6592cebf1a" Oct 07 12:31:52 crc kubenswrapper[5024]: E1007 12:31:52.577672 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ba4fcd35483f9a674d24481ce1ec3fc1ee9f0c644eda90f05156e6592cebf1a\": container with ID starting with 2ba4fcd35483f9a674d24481ce1ec3fc1ee9f0c644eda90f05156e6592cebf1a not found: ID does not exist" containerID="2ba4fcd35483f9a674d24481ce1ec3fc1ee9f0c644eda90f05156e6592cebf1a" Oct 07 12:31:52 crc kubenswrapper[5024]: I1007 12:31:52.577714 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ba4fcd35483f9a674d24481ce1ec3fc1ee9f0c644eda90f05156e6592cebf1a"} err="failed to get container status \"2ba4fcd35483f9a674d24481ce1ec3fc1ee9f0c644eda90f05156e6592cebf1a\": rpc error: code = NotFound desc = could not find container \"2ba4fcd35483f9a674d24481ce1ec3fc1ee9f0c644eda90f05156e6592cebf1a\": container with ID 
starting with 2ba4fcd35483f9a674d24481ce1ec3fc1ee9f0c644eda90f05156e6592cebf1a not found: ID does not exist" Oct 07 12:31:52 crc kubenswrapper[5024]: I1007 12:31:52.577743 5024 scope.go:117] "RemoveContainer" containerID="b9524e32a0fa4812123679f859cc81b2c646efbb274be86107270877a4b30169" Oct 07 12:31:52 crc kubenswrapper[5024]: E1007 12:31:52.577982 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9524e32a0fa4812123679f859cc81b2c646efbb274be86107270877a4b30169\": container with ID starting with b9524e32a0fa4812123679f859cc81b2c646efbb274be86107270877a4b30169 not found: ID does not exist" containerID="b9524e32a0fa4812123679f859cc81b2c646efbb274be86107270877a4b30169" Oct 07 12:31:52 crc kubenswrapper[5024]: I1007 12:31:52.578006 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9524e32a0fa4812123679f859cc81b2c646efbb274be86107270877a4b30169"} err="failed to get container status \"b9524e32a0fa4812123679f859cc81b2c646efbb274be86107270877a4b30169\": rpc error: code = NotFound desc = could not find container \"b9524e32a0fa4812123679f859cc81b2c646efbb274be86107270877a4b30169\": container with ID starting with b9524e32a0fa4812123679f859cc81b2c646efbb274be86107270877a4b30169 not found: ID does not exist" Oct 07 12:31:52 crc kubenswrapper[5024]: I1007 12:31:52.759270 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c12e98a2-133d-411c-b54a-e303efbe8889" path="/var/lib/kubelet/pods/c12e98a2-133d-411c-b54a-e303efbe8889/volumes" Oct 07 12:32:14 crc kubenswrapper[5024]: I1007 12:32:14.107099 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kw42v"] Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.133467 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" 
podUID="06411252-fabf-416c-8b3f-3cb830b235f4" containerName="oauth-openshift" containerID="cri-o://7259de77f543e67131ade53a3a6e1fe2214ecc19fb790137152127d1fbb13807" gracePeriod=15 Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.575683 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.603462 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-cliconfig\") pod \"06411252-fabf-416c-8b3f-3cb830b235f4\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.603639 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-user-template-error\") pod \"06411252-fabf-416c-8b3f-3cb830b235f4\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.603873 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-serving-cert\") pod \"06411252-fabf-416c-8b3f-3cb830b235f4\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.604237 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-trusted-ca-bundle\") pod \"06411252-fabf-416c-8b3f-3cb830b235f4\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 
12:32:39.604474 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-user-template-login\") pod \"06411252-fabf-416c-8b3f-3cb830b235f4\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.604509 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-service-ca\") pod \"06411252-fabf-416c-8b3f-3cb830b235f4\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.604540 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kg248\" (UniqueName: \"kubernetes.io/projected/06411252-fabf-416c-8b3f-3cb830b235f4-kube-api-access-kg248\") pod \"06411252-fabf-416c-8b3f-3cb830b235f4\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.604582 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-ocp-branding-template\") pod \"06411252-fabf-416c-8b3f-3cb830b235f4\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.604605 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-user-idp-0-file-data\") pod \"06411252-fabf-416c-8b3f-3cb830b235f4\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.604641 5024 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-router-certs\") pod \"06411252-fabf-416c-8b3f-3cb830b235f4\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.604679 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-session\") pod \"06411252-fabf-416c-8b3f-3cb830b235f4\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.604709 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06411252-fabf-416c-8b3f-3cb830b235f4-audit-dir\") pod \"06411252-fabf-416c-8b3f-3cb830b235f4\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.604733 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-user-template-provider-selection\") pod \"06411252-fabf-416c-8b3f-3cb830b235f4\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.604758 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/06411252-fabf-416c-8b3f-3cb830b235f4-audit-policies\") pod \"06411252-fabf-416c-8b3f-3cb830b235f4\" (UID: \"06411252-fabf-416c-8b3f-3cb830b235f4\") " Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.607340 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06411252-fabf-416c-8b3f-3cb830b235f4-audit-dir" 
(OuterVolumeSpecName: "audit-dir") pod "06411252-fabf-416c-8b3f-3cb830b235f4" (UID: "06411252-fabf-416c-8b3f-3cb830b235f4"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.610208 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "06411252-fabf-416c-8b3f-3cb830b235f4" (UID: "06411252-fabf-416c-8b3f-3cb830b235f4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.611407 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.611439 5024 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06411252-fabf-416c-8b3f-3cb830b235f4-audit-dir\") on node \"crc\" DevicePath \"\"" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.612393 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06411252-fabf-416c-8b3f-3cb830b235f4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "06411252-fabf-416c-8b3f-3cb830b235f4" (UID: "06411252-fabf-416c-8b3f-3cb830b235f4"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.613059 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "06411252-fabf-416c-8b3f-3cb830b235f4" (UID: "06411252-fabf-416c-8b3f-3cb830b235f4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.615608 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "06411252-fabf-416c-8b3f-3cb830b235f4" (UID: "06411252-fabf-416c-8b3f-3cb830b235f4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.615960 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06411252-fabf-416c-8b3f-3cb830b235f4-kube-api-access-kg248" (OuterVolumeSpecName: "kube-api-access-kg248") pod "06411252-fabf-416c-8b3f-3cb830b235f4" (UID: "06411252-fabf-416c-8b3f-3cb830b235f4"). InnerVolumeSpecName "kube-api-access-kg248". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.617108 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "06411252-fabf-416c-8b3f-3cb830b235f4" (UID: "06411252-fabf-416c-8b3f-3cb830b235f4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.619077 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "06411252-fabf-416c-8b3f-3cb830b235f4" (UID: "06411252-fabf-416c-8b3f-3cb830b235f4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.620157 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "06411252-fabf-416c-8b3f-3cb830b235f4" (UID: "06411252-fabf-416c-8b3f-3cb830b235f4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.620417 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "06411252-fabf-416c-8b3f-3cb830b235f4" (UID: "06411252-fabf-416c-8b3f-3cb830b235f4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.620735 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "06411252-fabf-416c-8b3f-3cb830b235f4" (UID: "06411252-fabf-416c-8b3f-3cb830b235f4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.620916 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "06411252-fabf-416c-8b3f-3cb830b235f4" (UID: "06411252-fabf-416c-8b3f-3cb830b235f4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.623757 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt"] Oct 07 12:32:39 crc kubenswrapper[5024]: E1007 12:32:39.624024 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4475b87f-7d9c-4a7d-aa85-fcce45d805ae" containerName="extract-utilities" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.624044 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="4475b87f-7d9c-4a7d-aa85-fcce45d805ae" containerName="extract-utilities" Oct 07 12:32:39 crc kubenswrapper[5024]: E1007 12:32:39.624061 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4475b87f-7d9c-4a7d-aa85-fcce45d805ae" containerName="registry-server" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.624069 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="4475b87f-7d9c-4a7d-aa85-fcce45d805ae" containerName="registry-server" Oct 07 12:32:39 crc kubenswrapper[5024]: E1007 12:32:39.624079 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0" containerName="extract-utilities" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.624087 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0" containerName="extract-utilities" Oct 07 12:32:39 crc kubenswrapper[5024]: E1007 12:32:39.624099 5024 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4475b87f-7d9c-4a7d-aa85-fcce45d805ae" containerName="extract-content" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.624107 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="4475b87f-7d9c-4a7d-aa85-fcce45d805ae" containerName="extract-content" Oct 07 12:32:39 crc kubenswrapper[5024]: E1007 12:32:39.624115 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0" containerName="registry-server" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.624124 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0" containerName="registry-server" Oct 07 12:32:39 crc kubenswrapper[5024]: E1007 12:32:39.624155 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0" containerName="extract-content" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.624167 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0" containerName="extract-content" Oct 07 12:32:39 crc kubenswrapper[5024]: E1007 12:32:39.624181 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c12e98a2-133d-411c-b54a-e303efbe8889" containerName="extract-content" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.624192 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c12e98a2-133d-411c-b54a-e303efbe8889" containerName="extract-content" Oct 07 12:32:39 crc kubenswrapper[5024]: E1007 12:32:39.624212 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b060128a-9755-499a-b0f4-d9fc67649e66" containerName="registry-server" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.624224 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b060128a-9755-499a-b0f4-d9fc67649e66" containerName="registry-server" Oct 07 12:32:39 crc kubenswrapper[5024]: E1007 12:32:39.624241 5024 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b060128a-9755-499a-b0f4-d9fc67649e66" containerName="extract-utilities" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.624253 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b060128a-9755-499a-b0f4-d9fc67649e66" containerName="extract-utilities" Oct 07 12:32:39 crc kubenswrapper[5024]: E1007 12:32:39.624269 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c12e98a2-133d-411c-b54a-e303efbe8889" containerName="extract-utilities" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.624280 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c12e98a2-133d-411c-b54a-e303efbe8889" containerName="extract-utilities" Oct 07 12:32:39 crc kubenswrapper[5024]: E1007 12:32:39.624291 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c12e98a2-133d-411c-b54a-e303efbe8889" containerName="registry-server" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.624299 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c12e98a2-133d-411c-b54a-e303efbe8889" containerName="registry-server" Oct 07 12:32:39 crc kubenswrapper[5024]: E1007 12:32:39.624312 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b060128a-9755-499a-b0f4-d9fc67649e66" containerName="extract-content" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.624320 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b060128a-9755-499a-b0f4-d9fc67649e66" containerName="extract-content" Oct 07 12:32:39 crc kubenswrapper[5024]: E1007 12:32:39.624329 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f7138cb-78ee-4467-be40-7d97cf6ed0e9" containerName="pruner" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.624338 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7138cb-78ee-4467-be40-7d97cf6ed0e9" containerName="pruner" Oct 07 12:32:39 crc kubenswrapper[5024]: E1007 12:32:39.624351 5024 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="06411252-fabf-416c-8b3f-3cb830b235f4" containerName="oauth-openshift" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.624359 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="06411252-fabf-416c-8b3f-3cb830b235f4" containerName="oauth-openshift" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.624472 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="4475b87f-7d9c-4a7d-aa85-fcce45d805ae" containerName="registry-server" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.624485 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b37a99d-f44f-4644-8ed5-f8bfaf4f13a0" containerName="registry-server" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.624499 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="c12e98a2-133d-411c-b54a-e303efbe8889" containerName="registry-server" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.624511 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="06411252-fabf-416c-8b3f-3cb830b235f4" containerName="oauth-openshift" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.624525 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="b060128a-9755-499a-b0f4-d9fc67649e66" containerName="registry-server" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.624539 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f7138cb-78ee-4467-be40-7d97cf6ed0e9" containerName="pruner" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.624970 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.629845 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt"] Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.632456 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "06411252-fabf-416c-8b3f-3cb830b235f4" (UID: "06411252-fabf-416c-8b3f-3cb830b235f4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.639869 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "06411252-fabf-416c-8b3f-3cb830b235f4" (UID: "06411252-fabf-416c-8b3f-3cb830b235f4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.712099 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-audit-dir\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.712176 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-system-router-certs\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.712213 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.712329 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.712386 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-user-template-login\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.712413 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.712446 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-system-service-ca\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.712480 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.712537 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-system-session\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.712559 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-audit-policies\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.712579 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-user-template-error\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.712623 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.712656 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: 
\"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.712722 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k5xn\" (UniqueName: \"kubernetes.io/projected/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-kube-api-access-8k5xn\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.712793 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.712805 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.712817 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kg248\" (UniqueName: \"kubernetes.io/projected/06411252-fabf-416c-8b3f-3cb830b235f4-kube-api-access-kg248\") on node \"crc\" DevicePath \"\"" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.712827 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.712836 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.712846 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.712855 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.712864 5024 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/06411252-fabf-416c-8b3f-3cb830b235f4-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.712874 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.712884 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.712893 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.712903 5024 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/06411252-fabf-416c-8b3f-3cb830b235f4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.739794 5024 generic.go:334] "Generic (PLEG): container finished" podID="06411252-fabf-416c-8b3f-3cb830b235f4" containerID="7259de77f543e67131ade53a3a6e1fe2214ecc19fb790137152127d1fbb13807" exitCode=0 Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.739846 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.739852 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" event={"ID":"06411252-fabf-416c-8b3f-3cb830b235f4","Type":"ContainerDied","Data":"7259de77f543e67131ade53a3a6e1fe2214ecc19fb790137152127d1fbb13807"} Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.739890 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kw42v" event={"ID":"06411252-fabf-416c-8b3f-3cb830b235f4","Type":"ContainerDied","Data":"659cb54e304323ad95953132a0c6a0e35cefb0c6cbd3b6cae117fcf199bb50dc"} Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.739920 5024 scope.go:117] "RemoveContainer" containerID="7259de77f543e67131ade53a3a6e1fe2214ecc19fb790137152127d1fbb13807" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.758504 5024 scope.go:117] "RemoveContainer" containerID="7259de77f543e67131ade53a3a6e1fe2214ecc19fb790137152127d1fbb13807" Oct 07 12:32:39 crc kubenswrapper[5024]: E1007 12:32:39.760568 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7259de77f543e67131ade53a3a6e1fe2214ecc19fb790137152127d1fbb13807\": container with ID starting with 
7259de77f543e67131ade53a3a6e1fe2214ecc19fb790137152127d1fbb13807 not found: ID does not exist" containerID="7259de77f543e67131ade53a3a6e1fe2214ecc19fb790137152127d1fbb13807" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.760617 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7259de77f543e67131ade53a3a6e1fe2214ecc19fb790137152127d1fbb13807"} err="failed to get container status \"7259de77f543e67131ade53a3a6e1fe2214ecc19fb790137152127d1fbb13807\": rpc error: code = NotFound desc = could not find container \"7259de77f543e67131ade53a3a6e1fe2214ecc19fb790137152127d1fbb13807\": container with ID starting with 7259de77f543e67131ade53a3a6e1fe2214ecc19fb790137152127d1fbb13807 not found: ID does not exist" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.775668 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kw42v"] Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.778927 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kw42v"] Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.814029 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k5xn\" (UniqueName: \"kubernetes.io/projected/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-kube-api-access-8k5xn\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.814069 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-audit-dir\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: 
I1007 12:32:39.814103 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-system-router-certs\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.814121 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.814182 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.814203 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-user-template-login\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.814221 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.814239 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-system-service-ca\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.814259 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.814278 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-system-session\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.814275 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-audit-dir\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc 
kubenswrapper[5024]: I1007 12:32:39.814294 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-audit-policies\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.814614 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-user-template-error\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.814689 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.814748 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.815348 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.815978 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-system-service-ca\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.816088 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-audit-policies\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.816359 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.818268 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc 
kubenswrapper[5024]: I1007 12:32:39.818325 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-user-template-error\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.818328 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.818598 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-system-session\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.819633 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-system-router-certs\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.819708 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-user-template-login\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.820046 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.843606 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.846473 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k5xn\" (UniqueName: \"kubernetes.io/projected/c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22-kube-api-access-8k5xn\") pod \"oauth-openshift-5bf578b4ff-5n2rt\" (UID: \"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:39 crc kubenswrapper[5024]: I1007 12:32:39.965689 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:40 crc kubenswrapper[5024]: I1007 12:32:40.128321 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt"] Oct 07 12:32:40 crc kubenswrapper[5024]: W1007 12:32:40.135539 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc719d7e9_b0d8_4898_a5a8_75a2e0c6ba22.slice/crio-305a66ce91bd06aedc202fd8aedbcc00a24d191ee277cb714ac578f6d2a8852a WatchSource:0}: Error finding container 305a66ce91bd06aedc202fd8aedbcc00a24d191ee277cb714ac578f6d2a8852a: Status 404 returned error can't find the container with id 305a66ce91bd06aedc202fd8aedbcc00a24d191ee277cb714ac578f6d2a8852a Oct 07 12:32:40 crc kubenswrapper[5024]: I1007 12:32:40.747267 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" event={"ID":"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22","Type":"ContainerStarted","Data":"03cfb7ea5dd1875bb5821b213a616fc00b68c9e9621c35b6a0bb7ad4c462708d"} Oct 07 12:32:40 crc kubenswrapper[5024]: I1007 12:32:40.748233 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:32:40 crc kubenswrapper[5024]: I1007 12:32:40.748253 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" event={"ID":"c719d7e9-b0d8-4898-a5a8-75a2e0c6ba22","Type":"ContainerStarted","Data":"305a66ce91bd06aedc202fd8aedbcc00a24d191ee277cb714ac578f6d2a8852a"} Oct 07 12:32:40 crc kubenswrapper[5024]: I1007 12:32:40.757432 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06411252-fabf-416c-8b3f-3cb830b235f4" path="/var/lib/kubelet/pods/06411252-fabf-416c-8b3f-3cb830b235f4/volumes" Oct 07 12:32:40 crc kubenswrapper[5024]: I1007 12:32:40.766596 5024 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" podStartSLOduration=26.766573554 podStartE2EDuration="26.766573554s" podCreationTimestamp="2025-10-07 12:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:32:40.765851423 +0000 UTC m=+298.841638271" watchObservedRunningTime="2025-10-07 12:32:40.766573554 +0000 UTC m=+298.842360392" Oct 07 12:32:41 crc kubenswrapper[5024]: I1007 12:32:41.016852 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5bf578b4ff-5n2rt" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.269841 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rsgpt"] Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.270982 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rsgpt" podUID="1080756c-912c-4750-b8b3-df0cc6e623f7" containerName="registry-server" containerID="cri-o://1a0f19ad2cb05cea984faa5e55e76074aed5553c756d06f4dff0a6a6b2dc9b84" gracePeriod=30 Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.279363 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hwthk"] Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.279997 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hwthk" podUID="0007f6bb-883d-4bb8-b3ee-4c37095c342d" containerName="registry-server" containerID="cri-o://42b352f1919aed03cf74adab082fa232ef765ac6ecce01232c970fa1ba7385fd" gracePeriod=30 Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.283079 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fqbxr"] Oct 
07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.283948 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-fqbxr" podUID="447bfecb-a799-47cc-ad14-2a10bc594d95" containerName="marketplace-operator" containerID="cri-o://1cce0b6734b7fd140b7932939ed40186da93521bb75f944205ae98299c1c1b3e" gracePeriod=30 Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.299010 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsmzn"] Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.299475 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dsmzn" podUID="8af207aa-2798-477a-80e7-d8c7377fa8f4" containerName="registry-server" containerID="cri-o://d800e69954694d64f92ce71abd7cc4d22615080a40f9d477fd7cfc3db9cd2740" gracePeriod=30 Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.302623 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w7cd9"] Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.303309 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-w7cd9" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.321526 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w84bt"] Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.321814 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w84bt" podUID="e8805154-72fd-434b-88f8-9cb3ca239aa9" containerName="registry-server" containerID="cri-o://e5f84d0dca6bd920d923b17102d4423fdf5c9bedcc5d5844c2e7090ab4ce3df3" gracePeriod=30 Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.327651 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w7cd9"] Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.425815 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0680117f-db9e-4d13-b02d-8a851e374b1f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-w7cd9\" (UID: \"0680117f-db9e-4d13-b02d-8a851e374b1f\") " pod="openshift-marketplace/marketplace-operator-79b997595-w7cd9" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.426263 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrsdq\" (UniqueName: \"kubernetes.io/projected/0680117f-db9e-4d13-b02d-8a851e374b1f-kube-api-access-wrsdq\") pod \"marketplace-operator-79b997595-w7cd9\" (UID: \"0680117f-db9e-4d13-b02d-8a851e374b1f\") " pod="openshift-marketplace/marketplace-operator-79b997595-w7cd9" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.426326 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/0680117f-db9e-4d13-b02d-8a851e374b1f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-w7cd9\" (UID: \"0680117f-db9e-4d13-b02d-8a851e374b1f\") " pod="openshift-marketplace/marketplace-operator-79b997595-w7cd9" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.527125 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0680117f-db9e-4d13-b02d-8a851e374b1f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-w7cd9\" (UID: \"0680117f-db9e-4d13-b02d-8a851e374b1f\") " pod="openshift-marketplace/marketplace-operator-79b997595-w7cd9" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.527255 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0680117f-db9e-4d13-b02d-8a851e374b1f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-w7cd9\" (UID: \"0680117f-db9e-4d13-b02d-8a851e374b1f\") " pod="openshift-marketplace/marketplace-operator-79b997595-w7cd9" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.527288 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrsdq\" (UniqueName: \"kubernetes.io/projected/0680117f-db9e-4d13-b02d-8a851e374b1f-kube-api-access-wrsdq\") pod \"marketplace-operator-79b997595-w7cd9\" (UID: \"0680117f-db9e-4d13-b02d-8a851e374b1f\") " pod="openshift-marketplace/marketplace-operator-79b997595-w7cd9" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.530709 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0680117f-db9e-4d13-b02d-8a851e374b1f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-w7cd9\" (UID: \"0680117f-db9e-4d13-b02d-8a851e374b1f\") " pod="openshift-marketplace/marketplace-operator-79b997595-w7cd9" Oct 07 12:33:09 
crc kubenswrapper[5024]: I1007 12:33:09.537818 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0680117f-db9e-4d13-b02d-8a851e374b1f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-w7cd9\" (UID: \"0680117f-db9e-4d13-b02d-8a851e374b1f\") " pod="openshift-marketplace/marketplace-operator-79b997595-w7cd9" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.544881 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrsdq\" (UniqueName: \"kubernetes.io/projected/0680117f-db9e-4d13-b02d-8a851e374b1f-kube-api-access-wrsdq\") pod \"marketplace-operator-79b997595-w7cd9\" (UID: \"0680117f-db9e-4d13-b02d-8a851e374b1f\") " pod="openshift-marketplace/marketplace-operator-79b997595-w7cd9" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.640273 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-w7cd9" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.650347 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rsgpt" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.692349 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hwthk" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.696063 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dsmzn" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.700864 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fqbxr" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.724373 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w84bt" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.835468 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nwd5\" (UniqueName: \"kubernetes.io/projected/8af207aa-2798-477a-80e7-d8c7377fa8f4-kube-api-access-6nwd5\") pod \"8af207aa-2798-477a-80e7-d8c7377fa8f4\" (UID: \"8af207aa-2798-477a-80e7-d8c7377fa8f4\") " Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.835528 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1080756c-912c-4750-b8b3-df0cc6e623f7-utilities\") pod \"1080756c-912c-4750-b8b3-df0cc6e623f7\" (UID: \"1080756c-912c-4750-b8b3-df0cc6e623f7\") " Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.835558 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8805154-72fd-434b-88f8-9cb3ca239aa9-utilities\") pod \"e8805154-72fd-434b-88f8-9cb3ca239aa9\" (UID: \"e8805154-72fd-434b-88f8-9cb3ca239aa9\") " Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.835595 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8af207aa-2798-477a-80e7-d8c7377fa8f4-utilities\") pod \"8af207aa-2798-477a-80e7-d8c7377fa8f4\" (UID: \"8af207aa-2798-477a-80e7-d8c7377fa8f4\") " Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.835646 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/447bfecb-a799-47cc-ad14-2a10bc594d95-marketplace-trusted-ca\") pod \"447bfecb-a799-47cc-ad14-2a10bc594d95\" (UID: \"447bfecb-a799-47cc-ad14-2a10bc594d95\") " Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.835677 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-vrgch\" (UniqueName: \"kubernetes.io/projected/0007f6bb-883d-4bb8-b3ee-4c37095c342d-kube-api-access-vrgch\") pod \"0007f6bb-883d-4bb8-b3ee-4c37095c342d\" (UID: \"0007f6bb-883d-4bb8-b3ee-4c37095c342d\") " Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.835699 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nqqh\" (UniqueName: \"kubernetes.io/projected/1080756c-912c-4750-b8b3-df0cc6e623f7-kube-api-access-2nqqh\") pod \"1080756c-912c-4750-b8b3-df0cc6e623f7\" (UID: \"1080756c-912c-4750-b8b3-df0cc6e623f7\") " Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.835726 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1080756c-912c-4750-b8b3-df0cc6e623f7-catalog-content\") pod \"1080756c-912c-4750-b8b3-df0cc6e623f7\" (UID: \"1080756c-912c-4750-b8b3-df0cc6e623f7\") " Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.836666 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1080756c-912c-4750-b8b3-df0cc6e623f7-utilities" (OuterVolumeSpecName: "utilities") pod "1080756c-912c-4750-b8b3-df0cc6e623f7" (UID: "1080756c-912c-4750-b8b3-df0cc6e623f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.837125 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8805154-72fd-434b-88f8-9cb3ca239aa9-utilities" (OuterVolumeSpecName: "utilities") pod "e8805154-72fd-434b-88f8-9cb3ca239aa9" (UID: "e8805154-72fd-434b-88f8-9cb3ca239aa9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.837188 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/447bfecb-a799-47cc-ad14-2a10bc594d95-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "447bfecb-a799-47cc-ad14-2a10bc594d95" (UID: "447bfecb-a799-47cc-ad14-2a10bc594d95"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.837262 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8af207aa-2798-477a-80e7-d8c7377fa8f4-utilities" (OuterVolumeSpecName: "utilities") pod "8af207aa-2798-477a-80e7-d8c7377fa8f4" (UID: "8af207aa-2798-477a-80e7-d8c7377fa8f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.837759 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xz64\" (UniqueName: \"kubernetes.io/projected/e8805154-72fd-434b-88f8-9cb3ca239aa9-kube-api-access-2xz64\") pod \"e8805154-72fd-434b-88f8-9cb3ca239aa9\" (UID: \"e8805154-72fd-434b-88f8-9cb3ca239aa9\") " Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.838757 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0007f6bb-883d-4bb8-b3ee-4c37095c342d-utilities" (OuterVolumeSpecName: "utilities") pod "0007f6bb-883d-4bb8-b3ee-4c37095c342d" (UID: "0007f6bb-883d-4bb8-b3ee-4c37095c342d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.837885 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0007f6bb-883d-4bb8-b3ee-4c37095c342d-utilities\") pod \"0007f6bb-883d-4bb8-b3ee-4c37095c342d\" (UID: \"0007f6bb-883d-4bb8-b3ee-4c37095c342d\") " Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.839557 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1080756c-912c-4750-b8b3-df0cc6e623f7-kube-api-access-2nqqh" (OuterVolumeSpecName: "kube-api-access-2nqqh") pod "1080756c-912c-4750-b8b3-df0cc6e623f7" (UID: "1080756c-912c-4750-b8b3-df0cc6e623f7"). InnerVolumeSpecName "kube-api-access-2nqqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.839578 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0007f6bb-883d-4bb8-b3ee-4c37095c342d-kube-api-access-vrgch" (OuterVolumeSpecName: "kube-api-access-vrgch") pod "0007f6bb-883d-4bb8-b3ee-4c37095c342d" (UID: "0007f6bb-883d-4bb8-b3ee-4c37095c342d"). InnerVolumeSpecName "kube-api-access-vrgch". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.839881 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8af207aa-2798-477a-80e7-d8c7377fa8f4-kube-api-access-6nwd5" (OuterVolumeSpecName: "kube-api-access-6nwd5") pod "8af207aa-2798-477a-80e7-d8c7377fa8f4" (UID: "8af207aa-2798-477a-80e7-d8c7377fa8f4"). InnerVolumeSpecName "kube-api-access-6nwd5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.841469 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8805154-72fd-434b-88f8-9cb3ca239aa9-kube-api-access-2xz64" (OuterVolumeSpecName: "kube-api-access-2xz64") pod "e8805154-72fd-434b-88f8-9cb3ca239aa9" (UID: "e8805154-72fd-434b-88f8-9cb3ca239aa9"). InnerVolumeSpecName "kube-api-access-2xz64". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.838869 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8805154-72fd-434b-88f8-9cb3ca239aa9-catalog-content\") pod \"e8805154-72fd-434b-88f8-9cb3ca239aa9\" (UID: \"e8805154-72fd-434b-88f8-9cb3ca239aa9\") " Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.843254 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0007f6bb-883d-4bb8-b3ee-4c37095c342d-catalog-content\") pod \"0007f6bb-883d-4bb8-b3ee-4c37095c342d\" (UID: \"0007f6bb-883d-4bb8-b3ee-4c37095c342d\") " Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.843296 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/447bfecb-a799-47cc-ad14-2a10bc594d95-marketplace-operator-metrics\") pod \"447bfecb-a799-47cc-ad14-2a10bc594d95\" (UID: \"447bfecb-a799-47cc-ad14-2a10bc594d95\") " Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.843351 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drc7k\" (UniqueName: \"kubernetes.io/projected/447bfecb-a799-47cc-ad14-2a10bc594d95-kube-api-access-drc7k\") pod \"447bfecb-a799-47cc-ad14-2a10bc594d95\" (UID: \"447bfecb-a799-47cc-ad14-2a10bc594d95\") " Oct 07 12:33:09 crc 
kubenswrapper[5024]: I1007 12:33:09.843374 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8af207aa-2798-477a-80e7-d8c7377fa8f4-catalog-content\") pod \"8af207aa-2798-477a-80e7-d8c7377fa8f4\" (UID: \"8af207aa-2798-477a-80e7-d8c7377fa8f4\") " Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.843845 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nwd5\" (UniqueName: \"kubernetes.io/projected/8af207aa-2798-477a-80e7-d8c7377fa8f4-kube-api-access-6nwd5\") on node \"crc\" DevicePath \"\"" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.843870 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1080756c-912c-4750-b8b3-df0cc6e623f7-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.843884 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8805154-72fd-434b-88f8-9cb3ca239aa9-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.843895 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8af207aa-2798-477a-80e7-d8c7377fa8f4-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.843908 5024 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/447bfecb-a799-47cc-ad14-2a10bc594d95-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.843920 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrgch\" (UniqueName: \"kubernetes.io/projected/0007f6bb-883d-4bb8-b3ee-4c37095c342d-kube-api-access-vrgch\") on node \"crc\" DevicePath \"\"" Oct 07 12:33:09 crc kubenswrapper[5024]: 
I1007 12:33:09.843933 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nqqh\" (UniqueName: \"kubernetes.io/projected/1080756c-912c-4750-b8b3-df0cc6e623f7-kube-api-access-2nqqh\") on node \"crc\" DevicePath \"\"" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.843943 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xz64\" (UniqueName: \"kubernetes.io/projected/e8805154-72fd-434b-88f8-9cb3ca239aa9-kube-api-access-2xz64\") on node \"crc\" DevicePath \"\"" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.843954 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0007f6bb-883d-4bb8-b3ee-4c37095c342d-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.854377 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/447bfecb-a799-47cc-ad14-2a10bc594d95-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "447bfecb-a799-47cc-ad14-2a10bc594d95" (UID: "447bfecb-a799-47cc-ad14-2a10bc594d95"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.857309 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/447bfecb-a799-47cc-ad14-2a10bc594d95-kube-api-access-drc7k" (OuterVolumeSpecName: "kube-api-access-drc7k") pod "447bfecb-a799-47cc-ad14-2a10bc594d95" (UID: "447bfecb-a799-47cc-ad14-2a10bc594d95"). InnerVolumeSpecName "kube-api-access-drc7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.876598 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w7cd9"] Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.882900 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8af207aa-2798-477a-80e7-d8c7377fa8f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8af207aa-2798-477a-80e7-d8c7377fa8f4" (UID: "8af207aa-2798-477a-80e7-d8c7377fa8f4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.897776 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1080756c-912c-4750-b8b3-df0cc6e623f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1080756c-912c-4750-b8b3-df0cc6e623f7" (UID: "1080756c-912c-4750-b8b3-df0cc6e623f7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.903162 5024 generic.go:334] "Generic (PLEG): container finished" podID="1080756c-912c-4750-b8b3-df0cc6e623f7" containerID="1a0f19ad2cb05cea984faa5e55e76074aed5553c756d06f4dff0a6a6b2dc9b84" exitCode=0 Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.903257 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rsgpt" event={"ID":"1080756c-912c-4750-b8b3-df0cc6e623f7","Type":"ContainerDied","Data":"1a0f19ad2cb05cea984faa5e55e76074aed5553c756d06f4dff0a6a6b2dc9b84"} Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.903292 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rsgpt" event={"ID":"1080756c-912c-4750-b8b3-df0cc6e623f7","Type":"ContainerDied","Data":"db302175ef30feb270c4c6acd33ff1eb9f46c42a0191a47d8e0cbcca017472be"} Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.903314 5024 scope.go:117] "RemoveContainer" containerID="1a0f19ad2cb05cea984faa5e55e76074aed5553c756d06f4dff0a6a6b2dc9b84" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.903514 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rsgpt" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.909243 5024 generic.go:334] "Generic (PLEG): container finished" podID="e8805154-72fd-434b-88f8-9cb3ca239aa9" containerID="e5f84d0dca6bd920d923b17102d4423fdf5c9bedcc5d5844c2e7090ab4ce3df3" exitCode=0 Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.909364 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w84bt" event={"ID":"e8805154-72fd-434b-88f8-9cb3ca239aa9","Type":"ContainerDied","Data":"e5f84d0dca6bd920d923b17102d4423fdf5c9bedcc5d5844c2e7090ab4ce3df3"} Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.909414 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w84bt" event={"ID":"e8805154-72fd-434b-88f8-9cb3ca239aa9","Type":"ContainerDied","Data":"0423bbacdace5cfff49ed1305a96553f49f859e68ba8c330082cdad5d46c6fbb"} Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.909321 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w84bt" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.911707 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0007f6bb-883d-4bb8-b3ee-4c37095c342d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0007f6bb-883d-4bb8-b3ee-4c37095c342d" (UID: "0007f6bb-883d-4bb8-b3ee-4c37095c342d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.913880 5024 generic.go:334] "Generic (PLEG): container finished" podID="0007f6bb-883d-4bb8-b3ee-4c37095c342d" containerID="42b352f1919aed03cf74adab082fa232ef765ac6ecce01232c970fa1ba7385fd" exitCode=0 Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.913920 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwthk" event={"ID":"0007f6bb-883d-4bb8-b3ee-4c37095c342d","Type":"ContainerDied","Data":"42b352f1919aed03cf74adab082fa232ef765ac6ecce01232c970fa1ba7385fd"} Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.913956 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwthk" event={"ID":"0007f6bb-883d-4bb8-b3ee-4c37095c342d","Type":"ContainerDied","Data":"d6965c8e65070a849d334a70c8301c74586c6ea0e63ab5704a5d12f870ad09d4"} Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.913933 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hwthk" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.917949 5024 generic.go:334] "Generic (PLEG): container finished" podID="447bfecb-a799-47cc-ad14-2a10bc594d95" containerID="1cce0b6734b7fd140b7932939ed40186da93521bb75f944205ae98299c1c1b3e" exitCode=0 Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.918012 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fqbxr" event={"ID":"447bfecb-a799-47cc-ad14-2a10bc594d95","Type":"ContainerDied","Data":"1cce0b6734b7fd140b7932939ed40186da93521bb75f944205ae98299c1c1b3e"} Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.918035 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fqbxr" event={"ID":"447bfecb-a799-47cc-ad14-2a10bc594d95","Type":"ContainerDied","Data":"a6ee43ef314b80752e276a4df270ceae2b9d672acaaa928eb547767d681f40bd"} Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.918096 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fqbxr" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.921297 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dsmzn" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.921348 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsmzn" event={"ID":"8af207aa-2798-477a-80e7-d8c7377fa8f4","Type":"ContainerDied","Data":"d800e69954694d64f92ce71abd7cc4d22615080a40f9d477fd7cfc3db9cd2740"} Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.921199 5024 generic.go:334] "Generic (PLEG): container finished" podID="8af207aa-2798-477a-80e7-d8c7377fa8f4" containerID="d800e69954694d64f92ce71abd7cc4d22615080a40f9d477fd7cfc3db9cd2740" exitCode=0 Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.923003 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsmzn" event={"ID":"8af207aa-2798-477a-80e7-d8c7377fa8f4","Type":"ContainerDied","Data":"0adb28aca3ddd823f1b028560b1f997e7f4f91b018c5f55eb1b72c25f7a7512d"} Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.923897 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-w7cd9" event={"ID":"0680117f-db9e-4d13-b02d-8a851e374b1f","Type":"ContainerStarted","Data":"518aa6cbc24493a939ef04595f91e6be001bb6614f16c17f7ba73c3d83da6b07"} Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.929261 5024 scope.go:117] "RemoveContainer" containerID="6539f883753b60335e0e35865dc6bde270596b83edf7e5c7b0c00b66747da85e" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.945088 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drc7k\" (UniqueName: \"kubernetes.io/projected/447bfecb-a799-47cc-ad14-2a10bc594d95-kube-api-access-drc7k\") on node \"crc\" DevicePath \"\"" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.945113 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8af207aa-2798-477a-80e7-d8c7377fa8f4-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.945150 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1080756c-912c-4750-b8b3-df0cc6e623f7-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.945159 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0007f6bb-883d-4bb8-b3ee-4c37095c342d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.945168 5024 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/447bfecb-a799-47cc-ad14-2a10bc594d95-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.954877 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rsgpt"] Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.957123 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rsgpt"] Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.978605 5024 scope.go:117] "RemoveContainer" containerID="c5db5f4a20a4bff7e0622b8589772bf40c2be9eac1544853983ae303ade32645" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.980123 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hwthk"] Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.983850 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hwthk"] Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.987849 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-fqbxr"] Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.991195 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8805154-72fd-434b-88f8-9cb3ca239aa9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8805154-72fd-434b-88f8-9cb3ca239aa9" (UID: "e8805154-72fd-434b-88f8-9cb3ca239aa9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.993633 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fqbxr"] Oct 07 12:33:09 crc kubenswrapper[5024]: I1007 12:33:09.999108 5024 scope.go:117] "RemoveContainer" containerID="1a0f19ad2cb05cea984faa5e55e76074aed5553c756d06f4dff0a6a6b2dc9b84" Oct 07 12:33:10 crc kubenswrapper[5024]: E1007 12:33:10.001980 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a0f19ad2cb05cea984faa5e55e76074aed5553c756d06f4dff0a6a6b2dc9b84\": container with ID starting with 1a0f19ad2cb05cea984faa5e55e76074aed5553c756d06f4dff0a6a6b2dc9b84 not found: ID does not exist" containerID="1a0f19ad2cb05cea984faa5e55e76074aed5553c756d06f4dff0a6a6b2dc9b84" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.002058 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a0f19ad2cb05cea984faa5e55e76074aed5553c756d06f4dff0a6a6b2dc9b84"} err="failed to get container status \"1a0f19ad2cb05cea984faa5e55e76074aed5553c756d06f4dff0a6a6b2dc9b84\": rpc error: code = NotFound desc = could not find container \"1a0f19ad2cb05cea984faa5e55e76074aed5553c756d06f4dff0a6a6b2dc9b84\": container with ID starting with 1a0f19ad2cb05cea984faa5e55e76074aed5553c756d06f4dff0a6a6b2dc9b84 not found: ID does not exist" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.002087 
5024 scope.go:117] "RemoveContainer" containerID="6539f883753b60335e0e35865dc6bde270596b83edf7e5c7b0c00b66747da85e" Oct 07 12:33:10 crc kubenswrapper[5024]: E1007 12:33:10.002760 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6539f883753b60335e0e35865dc6bde270596b83edf7e5c7b0c00b66747da85e\": container with ID starting with 6539f883753b60335e0e35865dc6bde270596b83edf7e5c7b0c00b66747da85e not found: ID does not exist" containerID="6539f883753b60335e0e35865dc6bde270596b83edf7e5c7b0c00b66747da85e" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.002790 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6539f883753b60335e0e35865dc6bde270596b83edf7e5c7b0c00b66747da85e"} err="failed to get container status \"6539f883753b60335e0e35865dc6bde270596b83edf7e5c7b0c00b66747da85e\": rpc error: code = NotFound desc = could not find container \"6539f883753b60335e0e35865dc6bde270596b83edf7e5c7b0c00b66747da85e\": container with ID starting with 6539f883753b60335e0e35865dc6bde270596b83edf7e5c7b0c00b66747da85e not found: ID does not exist" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.002807 5024 scope.go:117] "RemoveContainer" containerID="c5db5f4a20a4bff7e0622b8589772bf40c2be9eac1544853983ae303ade32645" Oct 07 12:33:10 crc kubenswrapper[5024]: E1007 12:33:10.003053 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5db5f4a20a4bff7e0622b8589772bf40c2be9eac1544853983ae303ade32645\": container with ID starting with c5db5f4a20a4bff7e0622b8589772bf40c2be9eac1544853983ae303ade32645 not found: ID does not exist" containerID="c5db5f4a20a4bff7e0622b8589772bf40c2be9eac1544853983ae303ade32645" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.003079 5024 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c5db5f4a20a4bff7e0622b8589772bf40c2be9eac1544853983ae303ade32645"} err="failed to get container status \"c5db5f4a20a4bff7e0622b8589772bf40c2be9eac1544853983ae303ade32645\": rpc error: code = NotFound desc = could not find container \"c5db5f4a20a4bff7e0622b8589772bf40c2be9eac1544853983ae303ade32645\": container with ID starting with c5db5f4a20a4bff7e0622b8589772bf40c2be9eac1544853983ae303ade32645 not found: ID does not exist" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.003096 5024 scope.go:117] "RemoveContainer" containerID="e5f84d0dca6bd920d923b17102d4423fdf5c9bedcc5d5844c2e7090ab4ce3df3" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.015457 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsmzn"] Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.018490 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsmzn"] Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.028949 5024 scope.go:117] "RemoveContainer" containerID="1bdfb303613cc0084e907409783bc5b135e52dbcd2bf6ce23203409f483cb647" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.042601 5024 scope.go:117] "RemoveContainer" containerID="80941f8f55a2f55c865e36591a98f563b0faa7e671410162d52f717e7d4de463" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.045759 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8805154-72fd-434b-88f8-9cb3ca239aa9-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.055398 5024 scope.go:117] "RemoveContainer" containerID="e5f84d0dca6bd920d923b17102d4423fdf5c9bedcc5d5844c2e7090ab4ce3df3" Oct 07 12:33:10 crc kubenswrapper[5024]: E1007 12:33:10.055737 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e5f84d0dca6bd920d923b17102d4423fdf5c9bedcc5d5844c2e7090ab4ce3df3\": container with ID starting with e5f84d0dca6bd920d923b17102d4423fdf5c9bedcc5d5844c2e7090ab4ce3df3 not found: ID does not exist" containerID="e5f84d0dca6bd920d923b17102d4423fdf5c9bedcc5d5844c2e7090ab4ce3df3" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.055808 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5f84d0dca6bd920d923b17102d4423fdf5c9bedcc5d5844c2e7090ab4ce3df3"} err="failed to get container status \"e5f84d0dca6bd920d923b17102d4423fdf5c9bedcc5d5844c2e7090ab4ce3df3\": rpc error: code = NotFound desc = could not find container \"e5f84d0dca6bd920d923b17102d4423fdf5c9bedcc5d5844c2e7090ab4ce3df3\": container with ID starting with e5f84d0dca6bd920d923b17102d4423fdf5c9bedcc5d5844c2e7090ab4ce3df3 not found: ID does not exist" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.055838 5024 scope.go:117] "RemoveContainer" containerID="1bdfb303613cc0084e907409783bc5b135e52dbcd2bf6ce23203409f483cb647" Oct 07 12:33:10 crc kubenswrapper[5024]: E1007 12:33:10.056130 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bdfb303613cc0084e907409783bc5b135e52dbcd2bf6ce23203409f483cb647\": container with ID starting with 1bdfb303613cc0084e907409783bc5b135e52dbcd2bf6ce23203409f483cb647 not found: ID does not exist" containerID="1bdfb303613cc0084e907409783bc5b135e52dbcd2bf6ce23203409f483cb647" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.056183 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bdfb303613cc0084e907409783bc5b135e52dbcd2bf6ce23203409f483cb647"} err="failed to get container status \"1bdfb303613cc0084e907409783bc5b135e52dbcd2bf6ce23203409f483cb647\": rpc error: code = NotFound desc = could not find container \"1bdfb303613cc0084e907409783bc5b135e52dbcd2bf6ce23203409f483cb647\": container with ID 
starting with 1bdfb303613cc0084e907409783bc5b135e52dbcd2bf6ce23203409f483cb647 not found: ID does not exist" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.056423 5024 scope.go:117] "RemoveContainer" containerID="80941f8f55a2f55c865e36591a98f563b0faa7e671410162d52f717e7d4de463" Oct 07 12:33:10 crc kubenswrapper[5024]: E1007 12:33:10.056734 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80941f8f55a2f55c865e36591a98f563b0faa7e671410162d52f717e7d4de463\": container with ID starting with 80941f8f55a2f55c865e36591a98f563b0faa7e671410162d52f717e7d4de463 not found: ID does not exist" containerID="80941f8f55a2f55c865e36591a98f563b0faa7e671410162d52f717e7d4de463" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.056846 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80941f8f55a2f55c865e36591a98f563b0faa7e671410162d52f717e7d4de463"} err="failed to get container status \"80941f8f55a2f55c865e36591a98f563b0faa7e671410162d52f717e7d4de463\": rpc error: code = NotFound desc = could not find container \"80941f8f55a2f55c865e36591a98f563b0faa7e671410162d52f717e7d4de463\": container with ID starting with 80941f8f55a2f55c865e36591a98f563b0faa7e671410162d52f717e7d4de463 not found: ID does not exist" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.056958 5024 scope.go:117] "RemoveContainer" containerID="42b352f1919aed03cf74adab082fa232ef765ac6ecce01232c970fa1ba7385fd" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.070828 5024 scope.go:117] "RemoveContainer" containerID="88099c17ba7f3dc18c16c18ae672d9da35fc371199d20d04cf1a5aecb0c63ac4" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.087634 5024 scope.go:117] "RemoveContainer" containerID="41957a0c428fd7830a92728785e043dd2d3e07ff35e8923835f0280bca7cb331" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.099452 5024 scope.go:117] "RemoveContainer" 
containerID="42b352f1919aed03cf74adab082fa232ef765ac6ecce01232c970fa1ba7385fd" Oct 07 12:33:10 crc kubenswrapper[5024]: E1007 12:33:10.099896 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42b352f1919aed03cf74adab082fa232ef765ac6ecce01232c970fa1ba7385fd\": container with ID starting with 42b352f1919aed03cf74adab082fa232ef765ac6ecce01232c970fa1ba7385fd not found: ID does not exist" containerID="42b352f1919aed03cf74adab082fa232ef765ac6ecce01232c970fa1ba7385fd" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.099934 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42b352f1919aed03cf74adab082fa232ef765ac6ecce01232c970fa1ba7385fd"} err="failed to get container status \"42b352f1919aed03cf74adab082fa232ef765ac6ecce01232c970fa1ba7385fd\": rpc error: code = NotFound desc = could not find container \"42b352f1919aed03cf74adab082fa232ef765ac6ecce01232c970fa1ba7385fd\": container with ID starting with 42b352f1919aed03cf74adab082fa232ef765ac6ecce01232c970fa1ba7385fd not found: ID does not exist" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.099962 5024 scope.go:117] "RemoveContainer" containerID="88099c17ba7f3dc18c16c18ae672d9da35fc371199d20d04cf1a5aecb0c63ac4" Oct 07 12:33:10 crc kubenswrapper[5024]: E1007 12:33:10.100500 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88099c17ba7f3dc18c16c18ae672d9da35fc371199d20d04cf1a5aecb0c63ac4\": container with ID starting with 88099c17ba7f3dc18c16c18ae672d9da35fc371199d20d04cf1a5aecb0c63ac4 not found: ID does not exist" containerID="88099c17ba7f3dc18c16c18ae672d9da35fc371199d20d04cf1a5aecb0c63ac4" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.100546 5024 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"88099c17ba7f3dc18c16c18ae672d9da35fc371199d20d04cf1a5aecb0c63ac4"} err="failed to get container status \"88099c17ba7f3dc18c16c18ae672d9da35fc371199d20d04cf1a5aecb0c63ac4\": rpc error: code = NotFound desc = could not find container \"88099c17ba7f3dc18c16c18ae672d9da35fc371199d20d04cf1a5aecb0c63ac4\": container with ID starting with 88099c17ba7f3dc18c16c18ae672d9da35fc371199d20d04cf1a5aecb0c63ac4 not found: ID does not exist" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.100577 5024 scope.go:117] "RemoveContainer" containerID="41957a0c428fd7830a92728785e043dd2d3e07ff35e8923835f0280bca7cb331" Oct 07 12:33:10 crc kubenswrapper[5024]: E1007 12:33:10.100855 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41957a0c428fd7830a92728785e043dd2d3e07ff35e8923835f0280bca7cb331\": container with ID starting with 41957a0c428fd7830a92728785e043dd2d3e07ff35e8923835f0280bca7cb331 not found: ID does not exist" containerID="41957a0c428fd7830a92728785e043dd2d3e07ff35e8923835f0280bca7cb331" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.100887 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41957a0c428fd7830a92728785e043dd2d3e07ff35e8923835f0280bca7cb331"} err="failed to get container status \"41957a0c428fd7830a92728785e043dd2d3e07ff35e8923835f0280bca7cb331\": rpc error: code = NotFound desc = could not find container \"41957a0c428fd7830a92728785e043dd2d3e07ff35e8923835f0280bca7cb331\": container with ID starting with 41957a0c428fd7830a92728785e043dd2d3e07ff35e8923835f0280bca7cb331 not found: ID does not exist" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.100906 5024 scope.go:117] "RemoveContainer" containerID="1cce0b6734b7fd140b7932939ed40186da93521bb75f944205ae98299c1c1b3e" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.115714 5024 scope.go:117] "RemoveContainer" 
containerID="1cce0b6734b7fd140b7932939ed40186da93521bb75f944205ae98299c1c1b3e" Oct 07 12:33:10 crc kubenswrapper[5024]: E1007 12:33:10.116102 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cce0b6734b7fd140b7932939ed40186da93521bb75f944205ae98299c1c1b3e\": container with ID starting with 1cce0b6734b7fd140b7932939ed40186da93521bb75f944205ae98299c1c1b3e not found: ID does not exist" containerID="1cce0b6734b7fd140b7932939ed40186da93521bb75f944205ae98299c1c1b3e" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.116181 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cce0b6734b7fd140b7932939ed40186da93521bb75f944205ae98299c1c1b3e"} err="failed to get container status \"1cce0b6734b7fd140b7932939ed40186da93521bb75f944205ae98299c1c1b3e\": rpc error: code = NotFound desc = could not find container \"1cce0b6734b7fd140b7932939ed40186da93521bb75f944205ae98299c1c1b3e\": container with ID starting with 1cce0b6734b7fd140b7932939ed40186da93521bb75f944205ae98299c1c1b3e not found: ID does not exist" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.116210 5024 scope.go:117] "RemoveContainer" containerID="d800e69954694d64f92ce71abd7cc4d22615080a40f9d477fd7cfc3db9cd2740" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.128869 5024 scope.go:117] "RemoveContainer" containerID="31b3ad2187c78f6987d87470728ac498904f61b4d1cbf63722db35fca66661c6" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.142255 5024 scope.go:117] "RemoveContainer" containerID="98ae4a363fa645e5434e375656b851b04fbfb286b16636b6dbcb52f1ee0ca516" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.156367 5024 scope.go:117] "RemoveContainer" containerID="d800e69954694d64f92ce71abd7cc4d22615080a40f9d477fd7cfc3db9cd2740" Oct 07 12:33:10 crc kubenswrapper[5024]: E1007 12:33:10.156776 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"d800e69954694d64f92ce71abd7cc4d22615080a40f9d477fd7cfc3db9cd2740\": container with ID starting with d800e69954694d64f92ce71abd7cc4d22615080a40f9d477fd7cfc3db9cd2740 not found: ID does not exist" containerID="d800e69954694d64f92ce71abd7cc4d22615080a40f9d477fd7cfc3db9cd2740" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.156821 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d800e69954694d64f92ce71abd7cc4d22615080a40f9d477fd7cfc3db9cd2740"} err="failed to get container status \"d800e69954694d64f92ce71abd7cc4d22615080a40f9d477fd7cfc3db9cd2740\": rpc error: code = NotFound desc = could not find container \"d800e69954694d64f92ce71abd7cc4d22615080a40f9d477fd7cfc3db9cd2740\": container with ID starting with d800e69954694d64f92ce71abd7cc4d22615080a40f9d477fd7cfc3db9cd2740 not found: ID does not exist" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.156846 5024 scope.go:117] "RemoveContainer" containerID="31b3ad2187c78f6987d87470728ac498904f61b4d1cbf63722db35fca66661c6" Oct 07 12:33:10 crc kubenswrapper[5024]: E1007 12:33:10.157266 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31b3ad2187c78f6987d87470728ac498904f61b4d1cbf63722db35fca66661c6\": container with ID starting with 31b3ad2187c78f6987d87470728ac498904f61b4d1cbf63722db35fca66661c6 not found: ID does not exist" containerID="31b3ad2187c78f6987d87470728ac498904f61b4d1cbf63722db35fca66661c6" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.157288 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31b3ad2187c78f6987d87470728ac498904f61b4d1cbf63722db35fca66661c6"} err="failed to get container status \"31b3ad2187c78f6987d87470728ac498904f61b4d1cbf63722db35fca66661c6\": rpc error: code = NotFound desc = could not find container 
\"31b3ad2187c78f6987d87470728ac498904f61b4d1cbf63722db35fca66661c6\": container with ID starting with 31b3ad2187c78f6987d87470728ac498904f61b4d1cbf63722db35fca66661c6 not found: ID does not exist" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.157304 5024 scope.go:117] "RemoveContainer" containerID="98ae4a363fa645e5434e375656b851b04fbfb286b16636b6dbcb52f1ee0ca516" Oct 07 12:33:10 crc kubenswrapper[5024]: E1007 12:33:10.157636 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98ae4a363fa645e5434e375656b851b04fbfb286b16636b6dbcb52f1ee0ca516\": container with ID starting with 98ae4a363fa645e5434e375656b851b04fbfb286b16636b6dbcb52f1ee0ca516 not found: ID does not exist" containerID="98ae4a363fa645e5434e375656b851b04fbfb286b16636b6dbcb52f1ee0ca516" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.157673 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98ae4a363fa645e5434e375656b851b04fbfb286b16636b6dbcb52f1ee0ca516"} err="failed to get container status \"98ae4a363fa645e5434e375656b851b04fbfb286b16636b6dbcb52f1ee0ca516\": rpc error: code = NotFound desc = could not find container \"98ae4a363fa645e5434e375656b851b04fbfb286b16636b6dbcb52f1ee0ca516\": container with ID starting with 98ae4a363fa645e5434e375656b851b04fbfb286b16636b6dbcb52f1ee0ca516 not found: ID does not exist" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.236259 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w84bt"] Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.237622 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w84bt"] Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.757783 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0007f6bb-883d-4bb8-b3ee-4c37095c342d" 
path="/var/lib/kubelet/pods/0007f6bb-883d-4bb8-b3ee-4c37095c342d/volumes" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.758867 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1080756c-912c-4750-b8b3-df0cc6e623f7" path="/var/lib/kubelet/pods/1080756c-912c-4750-b8b3-df0cc6e623f7/volumes" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.759622 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="447bfecb-a799-47cc-ad14-2a10bc594d95" path="/var/lib/kubelet/pods/447bfecb-a799-47cc-ad14-2a10bc594d95/volumes" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.760198 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8af207aa-2798-477a-80e7-d8c7377fa8f4" path="/var/lib/kubelet/pods/8af207aa-2798-477a-80e7-d8c7377fa8f4/volumes" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.760897 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8805154-72fd-434b-88f8-9cb3ca239aa9" path="/var/lib/kubelet/pods/e8805154-72fd-434b-88f8-9cb3ca239aa9/volumes" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.931501 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-w7cd9" event={"ID":"0680117f-db9e-4d13-b02d-8a851e374b1f","Type":"ContainerStarted","Data":"bc5c5379fa1ac6dcffa8490a3586a76f1db35de37265751272794b15dbd672c4"} Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.932823 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-w7cd9" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.935542 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-w7cd9" Oct 07 12:33:10 crc kubenswrapper[5024]: I1007 12:33:10.952003 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/marketplace-operator-79b997595-w7cd9" podStartSLOduration=1.951986947 podStartE2EDuration="1.951986947s" podCreationTimestamp="2025-10-07 12:33:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:33:10.945101165 +0000 UTC m=+329.020888023" watchObservedRunningTime="2025-10-07 12:33:10.951986947 +0000 UTC m=+329.027773775" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.489368 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zv7nb"] Oct 07 12:33:11 crc kubenswrapper[5024]: E1007 12:33:11.489570 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8af207aa-2798-477a-80e7-d8c7377fa8f4" containerName="extract-utilities" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.489584 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8af207aa-2798-477a-80e7-d8c7377fa8f4" containerName="extract-utilities" Oct 07 12:33:11 crc kubenswrapper[5024]: E1007 12:33:11.489595 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8af207aa-2798-477a-80e7-d8c7377fa8f4" containerName="extract-content" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.489601 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8af207aa-2798-477a-80e7-d8c7377fa8f4" containerName="extract-content" Oct 07 12:33:11 crc kubenswrapper[5024]: E1007 12:33:11.489611 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0007f6bb-883d-4bb8-b3ee-4c37095c342d" containerName="extract-utilities" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.489618 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0007f6bb-883d-4bb8-b3ee-4c37095c342d" containerName="extract-utilities" Oct 07 12:33:11 crc kubenswrapper[5024]: E1007 12:33:11.489625 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8805154-72fd-434b-88f8-9cb3ca239aa9" 
containerName="registry-server" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.489631 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8805154-72fd-434b-88f8-9cb3ca239aa9" containerName="registry-server" Oct 07 12:33:11 crc kubenswrapper[5024]: E1007 12:33:11.489640 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="447bfecb-a799-47cc-ad14-2a10bc594d95" containerName="marketplace-operator" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.489646 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="447bfecb-a799-47cc-ad14-2a10bc594d95" containerName="marketplace-operator" Oct 07 12:33:11 crc kubenswrapper[5024]: E1007 12:33:11.489655 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0007f6bb-883d-4bb8-b3ee-4c37095c342d" containerName="extract-content" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.489660 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0007f6bb-883d-4bb8-b3ee-4c37095c342d" containerName="extract-content" Oct 07 12:33:11 crc kubenswrapper[5024]: E1007 12:33:11.489670 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1080756c-912c-4750-b8b3-df0cc6e623f7" containerName="registry-server" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.489676 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="1080756c-912c-4750-b8b3-df0cc6e623f7" containerName="registry-server" Oct 07 12:33:11 crc kubenswrapper[5024]: E1007 12:33:11.489682 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0007f6bb-883d-4bb8-b3ee-4c37095c342d" containerName="registry-server" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.489689 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0007f6bb-883d-4bb8-b3ee-4c37095c342d" containerName="registry-server" Oct 07 12:33:11 crc kubenswrapper[5024]: E1007 12:33:11.489698 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8af207aa-2798-477a-80e7-d8c7377fa8f4" 
containerName="registry-server" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.489704 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8af207aa-2798-477a-80e7-d8c7377fa8f4" containerName="registry-server" Oct 07 12:33:11 crc kubenswrapper[5024]: E1007 12:33:11.489713 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8805154-72fd-434b-88f8-9cb3ca239aa9" containerName="extract-utilities" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.489719 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8805154-72fd-434b-88f8-9cb3ca239aa9" containerName="extract-utilities" Oct 07 12:33:11 crc kubenswrapper[5024]: E1007 12:33:11.489727 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8805154-72fd-434b-88f8-9cb3ca239aa9" containerName="extract-content" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.489732 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8805154-72fd-434b-88f8-9cb3ca239aa9" containerName="extract-content" Oct 07 12:33:11 crc kubenswrapper[5024]: E1007 12:33:11.489743 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1080756c-912c-4750-b8b3-df0cc6e623f7" containerName="extract-utilities" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.489749 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="1080756c-912c-4750-b8b3-df0cc6e623f7" containerName="extract-utilities" Oct 07 12:33:11 crc kubenswrapper[5024]: E1007 12:33:11.489757 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1080756c-912c-4750-b8b3-df0cc6e623f7" containerName="extract-content" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.489763 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="1080756c-912c-4750-b8b3-df0cc6e623f7" containerName="extract-content" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.489866 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="447bfecb-a799-47cc-ad14-2a10bc594d95" 
containerName="marketplace-operator" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.489876 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="8af207aa-2798-477a-80e7-d8c7377fa8f4" containerName="registry-server" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.489889 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8805154-72fd-434b-88f8-9cb3ca239aa9" containerName="registry-server" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.489896 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="0007f6bb-883d-4bb8-b3ee-4c37095c342d" containerName="registry-server" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.489903 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="1080756c-912c-4750-b8b3-df0cc6e623f7" containerName="registry-server" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.490556 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zv7nb" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.492527 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.493812 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zv7nb"] Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.562746 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/637ca0b1-6e76-47ae-a1d0-373212a885fa-catalog-content\") pod \"redhat-marketplace-zv7nb\" (UID: \"637ca0b1-6e76-47ae-a1d0-373212a885fa\") " pod="openshift-marketplace/redhat-marketplace-zv7nb" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.562823 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/637ca0b1-6e76-47ae-a1d0-373212a885fa-utilities\") pod \"redhat-marketplace-zv7nb\" (UID: \"637ca0b1-6e76-47ae-a1d0-373212a885fa\") " pod="openshift-marketplace/redhat-marketplace-zv7nb" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.562857 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sppmg\" (UniqueName: \"kubernetes.io/projected/637ca0b1-6e76-47ae-a1d0-373212a885fa-kube-api-access-sppmg\") pod \"redhat-marketplace-zv7nb\" (UID: \"637ca0b1-6e76-47ae-a1d0-373212a885fa\") " pod="openshift-marketplace/redhat-marketplace-zv7nb" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.664104 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/637ca0b1-6e76-47ae-a1d0-373212a885fa-utilities\") pod \"redhat-marketplace-zv7nb\" (UID: \"637ca0b1-6e76-47ae-a1d0-373212a885fa\") " pod="openshift-marketplace/redhat-marketplace-zv7nb" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.664183 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sppmg\" (UniqueName: \"kubernetes.io/projected/637ca0b1-6e76-47ae-a1d0-373212a885fa-kube-api-access-sppmg\") pod \"redhat-marketplace-zv7nb\" (UID: \"637ca0b1-6e76-47ae-a1d0-373212a885fa\") " pod="openshift-marketplace/redhat-marketplace-zv7nb" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.664608 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/637ca0b1-6e76-47ae-a1d0-373212a885fa-utilities\") pod \"redhat-marketplace-zv7nb\" (UID: \"637ca0b1-6e76-47ae-a1d0-373212a885fa\") " pod="openshift-marketplace/redhat-marketplace-zv7nb" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.664647 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/637ca0b1-6e76-47ae-a1d0-373212a885fa-catalog-content\") pod \"redhat-marketplace-zv7nb\" (UID: \"637ca0b1-6e76-47ae-a1d0-373212a885fa\") " pod="openshift-marketplace/redhat-marketplace-zv7nb" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.665074 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/637ca0b1-6e76-47ae-a1d0-373212a885fa-catalog-content\") pod \"redhat-marketplace-zv7nb\" (UID: \"637ca0b1-6e76-47ae-a1d0-373212a885fa\") " pod="openshift-marketplace/redhat-marketplace-zv7nb" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.695626 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7cxfk"] Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.696263 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sppmg\" (UniqueName: \"kubernetes.io/projected/637ca0b1-6e76-47ae-a1d0-373212a885fa-kube-api-access-sppmg\") pod \"redhat-marketplace-zv7nb\" (UID: \"637ca0b1-6e76-47ae-a1d0-373212a885fa\") " pod="openshift-marketplace/redhat-marketplace-zv7nb" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.697225 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7cxfk" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.699522 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.713392 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7cxfk"] Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.811298 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zv7nb" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.867415 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564-catalog-content\") pod \"redhat-operators-7cxfk\" (UID: \"8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564\") " pod="openshift-marketplace/redhat-operators-7cxfk" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.867463 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzp88\" (UniqueName: \"kubernetes.io/projected/8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564-kube-api-access-xzp88\") pod \"redhat-operators-7cxfk\" (UID: \"8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564\") " pod="openshift-marketplace/redhat-operators-7cxfk" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.867509 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564-utilities\") pod \"redhat-operators-7cxfk\" (UID: \"8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564\") " pod="openshift-marketplace/redhat-operators-7cxfk" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.968878 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564-catalog-content\") pod \"redhat-operators-7cxfk\" (UID: \"8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564\") " pod="openshift-marketplace/redhat-operators-7cxfk" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.969244 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzp88\" (UniqueName: \"kubernetes.io/projected/8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564-kube-api-access-xzp88\") pod 
\"redhat-operators-7cxfk\" (UID: \"8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564\") " pod="openshift-marketplace/redhat-operators-7cxfk" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.969293 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564-utilities\") pod \"redhat-operators-7cxfk\" (UID: \"8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564\") " pod="openshift-marketplace/redhat-operators-7cxfk" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.969384 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564-catalog-content\") pod \"redhat-operators-7cxfk\" (UID: \"8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564\") " pod="openshift-marketplace/redhat-operators-7cxfk" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.969640 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564-utilities\") pod \"redhat-operators-7cxfk\" (UID: \"8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564\") " pod="openshift-marketplace/redhat-operators-7cxfk" Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.976257 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zv7nb"] Oct 07 12:33:11 crc kubenswrapper[5024]: I1007 12:33:11.986363 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzp88\" (UniqueName: \"kubernetes.io/projected/8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564-kube-api-access-xzp88\") pod \"redhat-operators-7cxfk\" (UID: \"8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564\") " pod="openshift-marketplace/redhat-operators-7cxfk" Oct 07 12:33:12 crc kubenswrapper[5024]: I1007 12:33:12.028321 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7cxfk" Oct 07 12:33:12 crc kubenswrapper[5024]: I1007 12:33:12.177628 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7cxfk"] Oct 07 12:33:12 crc kubenswrapper[5024]: W1007 12:33:12.183456 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fd3bcc6_e851_49ed_8b52_ed1ebfcf5564.slice/crio-8e93565af4bebcade4591b5e0cbe2a1884c2b959330459f8cceeae06c42822b0 WatchSource:0}: Error finding container 8e93565af4bebcade4591b5e0cbe2a1884c2b959330459f8cceeae06c42822b0: Status 404 returned error can't find the container with id 8e93565af4bebcade4591b5e0cbe2a1884c2b959330459f8cceeae06c42822b0 Oct 07 12:33:12 crc kubenswrapper[5024]: I1007 12:33:12.945054 5024 generic.go:334] "Generic (PLEG): container finished" podID="8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564" containerID="be35d4b655b24cc37aa260f8eea39e1cbafdae3a3b85b8d11bb2efdd82b06f15" exitCode=0 Oct 07 12:33:12 crc kubenswrapper[5024]: I1007 12:33:12.945175 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7cxfk" event={"ID":"8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564","Type":"ContainerDied","Data":"be35d4b655b24cc37aa260f8eea39e1cbafdae3a3b85b8d11bb2efdd82b06f15"} Oct 07 12:33:12 crc kubenswrapper[5024]: I1007 12:33:12.945385 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7cxfk" event={"ID":"8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564","Type":"ContainerStarted","Data":"8e93565af4bebcade4591b5e0cbe2a1884c2b959330459f8cceeae06c42822b0"} Oct 07 12:33:12 crc kubenswrapper[5024]: I1007 12:33:12.948707 5024 generic.go:334] "Generic (PLEG): container finished" podID="637ca0b1-6e76-47ae-a1d0-373212a885fa" containerID="f381b28bcd9ebc78beb4594e53b3f18abfe8cfa2890aa4d4f2b10289651b7a32" exitCode=0 Oct 07 12:33:12 crc kubenswrapper[5024]: I1007 12:33:12.949297 
5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zv7nb" event={"ID":"637ca0b1-6e76-47ae-a1d0-373212a885fa","Type":"ContainerDied","Data":"f381b28bcd9ebc78beb4594e53b3f18abfe8cfa2890aa4d4f2b10289651b7a32"} Oct 07 12:33:12 crc kubenswrapper[5024]: I1007 12:33:12.949343 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zv7nb" event={"ID":"637ca0b1-6e76-47ae-a1d0-373212a885fa","Type":"ContainerStarted","Data":"e128bdd55108eece7c3beb4a0a80400d99505a89637f925935a45119d00c610f"} Oct 07 12:33:13 crc kubenswrapper[5024]: I1007 12:33:13.893186 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h2zjw"] Oct 07 12:33:13 crc kubenswrapper[5024]: I1007 12:33:13.894566 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h2zjw" Oct 07 12:33:13 crc kubenswrapper[5024]: I1007 12:33:13.897433 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 07 12:33:13 crc kubenswrapper[5024]: I1007 12:33:13.900618 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h2zjw"] Oct 07 12:33:13 crc kubenswrapper[5024]: I1007 12:33:13.960898 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7cxfk" event={"ID":"8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564","Type":"ContainerStarted","Data":"3056157652b4942f4fed8b79003afd166aa7b4fbc0f61185fcfe34644befa783"} Oct 07 12:33:13 crc kubenswrapper[5024]: I1007 12:33:13.963269 5024 generic.go:334] "Generic (PLEG): container finished" podID="637ca0b1-6e76-47ae-a1d0-373212a885fa" containerID="3d5aa89a1a5dbd91523ec972483cc69bd2842ec28821bf8b9a3b21d2071ae644" exitCode=0 Oct 07 12:33:13 crc kubenswrapper[5024]: I1007 12:33:13.963314 5024 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-zv7nb" event={"ID":"637ca0b1-6e76-47ae-a1d0-373212a885fa","Type":"ContainerDied","Data":"3d5aa89a1a5dbd91523ec972483cc69bd2842ec28821bf8b9a3b21d2071ae644"} Oct 07 12:33:13 crc kubenswrapper[5024]: I1007 12:33:13.993358 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/408f514a-005e-4955-bca5-de53bc46161b-utilities\") pod \"certified-operators-h2zjw\" (UID: \"408f514a-005e-4955-bca5-de53bc46161b\") " pod="openshift-marketplace/certified-operators-h2zjw" Oct 07 12:33:13 crc kubenswrapper[5024]: I1007 12:33:13.993440 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6mln\" (UniqueName: \"kubernetes.io/projected/408f514a-005e-4955-bca5-de53bc46161b-kube-api-access-v6mln\") pod \"certified-operators-h2zjw\" (UID: \"408f514a-005e-4955-bca5-de53bc46161b\") " pod="openshift-marketplace/certified-operators-h2zjw" Oct 07 12:33:13 crc kubenswrapper[5024]: I1007 12:33:13.993464 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/408f514a-005e-4955-bca5-de53bc46161b-catalog-content\") pod \"certified-operators-h2zjw\" (UID: \"408f514a-005e-4955-bca5-de53bc46161b\") " pod="openshift-marketplace/certified-operators-h2zjw" Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.084844 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mjz7t"] Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.086197 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mjz7t" Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.088082 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.092622 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mjz7t"] Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.095979 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/408f514a-005e-4955-bca5-de53bc46161b-utilities\") pod \"certified-operators-h2zjw\" (UID: \"408f514a-005e-4955-bca5-de53bc46161b\") " pod="openshift-marketplace/certified-operators-h2zjw" Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.096073 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6mln\" (UniqueName: \"kubernetes.io/projected/408f514a-005e-4955-bca5-de53bc46161b-kube-api-access-v6mln\") pod \"certified-operators-h2zjw\" (UID: \"408f514a-005e-4955-bca5-de53bc46161b\") " pod="openshift-marketplace/certified-operators-h2zjw" Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.096100 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/408f514a-005e-4955-bca5-de53bc46161b-catalog-content\") pod \"certified-operators-h2zjw\" (UID: \"408f514a-005e-4955-bca5-de53bc46161b\") " pod="openshift-marketplace/certified-operators-h2zjw" Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.096614 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/408f514a-005e-4955-bca5-de53bc46161b-catalog-content\") pod \"certified-operators-h2zjw\" (UID: \"408f514a-005e-4955-bca5-de53bc46161b\") " 
pod="openshift-marketplace/certified-operators-h2zjw" Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.096836 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/408f514a-005e-4955-bca5-de53bc46161b-utilities\") pod \"certified-operators-h2zjw\" (UID: \"408f514a-005e-4955-bca5-de53bc46161b\") " pod="openshift-marketplace/certified-operators-h2zjw" Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.122210 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6mln\" (UniqueName: \"kubernetes.io/projected/408f514a-005e-4955-bca5-de53bc46161b-kube-api-access-v6mln\") pod \"certified-operators-h2zjw\" (UID: \"408f514a-005e-4955-bca5-de53bc46161b\") " pod="openshift-marketplace/certified-operators-h2zjw" Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.196882 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9jjd\" (UniqueName: \"kubernetes.io/projected/f3b3a3ab-50e1-4fc2-942e-b860023a3bb5-kube-api-access-v9jjd\") pod \"community-operators-mjz7t\" (UID: \"f3b3a3ab-50e1-4fc2-942e-b860023a3bb5\") " pod="openshift-marketplace/community-operators-mjz7t" Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.196988 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b3a3ab-50e1-4fc2-942e-b860023a3bb5-utilities\") pod \"community-operators-mjz7t\" (UID: \"f3b3a3ab-50e1-4fc2-942e-b860023a3bb5\") " pod="openshift-marketplace/community-operators-mjz7t" Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.197015 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b3a3ab-50e1-4fc2-942e-b860023a3bb5-catalog-content\") pod \"community-operators-mjz7t\" (UID: 
\"f3b3a3ab-50e1-4fc2-942e-b860023a3bb5\") " pod="openshift-marketplace/community-operators-mjz7t" Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.298615 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9jjd\" (UniqueName: \"kubernetes.io/projected/f3b3a3ab-50e1-4fc2-942e-b860023a3bb5-kube-api-access-v9jjd\") pod \"community-operators-mjz7t\" (UID: \"f3b3a3ab-50e1-4fc2-942e-b860023a3bb5\") " pod="openshift-marketplace/community-operators-mjz7t" Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.298702 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b3a3ab-50e1-4fc2-942e-b860023a3bb5-utilities\") pod \"community-operators-mjz7t\" (UID: \"f3b3a3ab-50e1-4fc2-942e-b860023a3bb5\") " pod="openshift-marketplace/community-operators-mjz7t" Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.298733 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b3a3ab-50e1-4fc2-942e-b860023a3bb5-catalog-content\") pod \"community-operators-mjz7t\" (UID: \"f3b3a3ab-50e1-4fc2-942e-b860023a3bb5\") " pod="openshift-marketplace/community-operators-mjz7t" Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.299274 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b3a3ab-50e1-4fc2-942e-b860023a3bb5-catalog-content\") pod \"community-operators-mjz7t\" (UID: \"f3b3a3ab-50e1-4fc2-942e-b860023a3bb5\") " pod="openshift-marketplace/community-operators-mjz7t" Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.299358 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b3a3ab-50e1-4fc2-942e-b860023a3bb5-utilities\") pod \"community-operators-mjz7t\" (UID: \"f3b3a3ab-50e1-4fc2-942e-b860023a3bb5\") 
" pod="openshift-marketplace/community-operators-mjz7t" Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.304523 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h2zjw" Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.317194 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9jjd\" (UniqueName: \"kubernetes.io/projected/f3b3a3ab-50e1-4fc2-942e-b860023a3bb5-kube-api-access-v9jjd\") pod \"community-operators-mjz7t\" (UID: \"f3b3a3ab-50e1-4fc2-942e-b860023a3bb5\") " pod="openshift-marketplace/community-operators-mjz7t" Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.400345 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mjz7t" Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.722329 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h2zjw"] Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.803407 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mjz7t"] Oct 07 12:33:14 crc kubenswrapper[5024]: W1007 12:33:14.813645 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3b3a3ab_50e1_4fc2_942e_b860023a3bb5.slice/crio-dc146df4766fa369d18e19ed8e9508abcca045cd67251f11a2a452119d027cc2 WatchSource:0}: Error finding container dc146df4766fa369d18e19ed8e9508abcca045cd67251f11a2a452119d027cc2: Status 404 returned error can't find the container with id dc146df4766fa369d18e19ed8e9508abcca045cd67251f11a2a452119d027cc2 Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.976586 5024 generic.go:334] "Generic (PLEG): container finished" podID="408f514a-005e-4955-bca5-de53bc46161b" containerID="77cd36d3a31f960c3f471c6b8900ca73a80c77746240a0037f1a7461733347ca" exitCode=0 Oct 07 12:33:14 
crc kubenswrapper[5024]: I1007 12:33:14.976801 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h2zjw" event={"ID":"408f514a-005e-4955-bca5-de53bc46161b","Type":"ContainerDied","Data":"77cd36d3a31f960c3f471c6b8900ca73a80c77746240a0037f1a7461733347ca"} Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.976927 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h2zjw" event={"ID":"408f514a-005e-4955-bca5-de53bc46161b","Type":"ContainerStarted","Data":"4e22f13bd329b21954b9bf5218eb099b6fcb2a0d25413978f4eb34a3aea0eed9"} Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.978234 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjz7t" event={"ID":"f3b3a3ab-50e1-4fc2-942e-b860023a3bb5","Type":"ContainerStarted","Data":"dc146df4766fa369d18e19ed8e9508abcca045cd67251f11a2a452119d027cc2"} Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.982023 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zv7nb" event={"ID":"637ca0b1-6e76-47ae-a1d0-373212a885fa","Type":"ContainerStarted","Data":"206a6473a827a8a09c02b1d03a901e596a25af31b3f8f28fcc0ee3c90646ffd0"} Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.984709 5024 generic.go:334] "Generic (PLEG): container finished" podID="8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564" containerID="3056157652b4942f4fed8b79003afd166aa7b4fbc0f61185fcfe34644befa783" exitCode=0 Oct 07 12:33:14 crc kubenswrapper[5024]: I1007 12:33:14.984746 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7cxfk" event={"ID":"8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564","Type":"ContainerDied","Data":"3056157652b4942f4fed8b79003afd166aa7b4fbc0f61185fcfe34644befa783"} Oct 07 12:33:15 crc kubenswrapper[5024]: I1007 12:33:15.028027 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-zv7nb" podStartSLOduration=2.466640892 podStartE2EDuration="4.028007121s" podCreationTimestamp="2025-10-07 12:33:11 +0000 UTC" firstStartedPulling="2025-10-07 12:33:12.953050914 +0000 UTC m=+331.028837752" lastFinishedPulling="2025-10-07 12:33:14.514417143 +0000 UTC m=+332.590203981" observedRunningTime="2025-10-07 12:33:15.02764324 +0000 UTC m=+333.103430078" watchObservedRunningTime="2025-10-07 12:33:15.028007121 +0000 UTC m=+333.103793959" Oct 07 12:33:15 crc kubenswrapper[5024]: I1007 12:33:15.991617 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7cxfk" event={"ID":"8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564","Type":"ContainerStarted","Data":"fe84a3deb9b060e2722004b8ebb6affa9b410ac9a78b900ebcc4eb2fc0f8f972"} Oct 07 12:33:15 crc kubenswrapper[5024]: I1007 12:33:15.994881 5024 generic.go:334] "Generic (PLEG): container finished" podID="408f514a-005e-4955-bca5-de53bc46161b" containerID="f54463eb81972bde7b9edac0dd56355337007f41ce35e838b422e6d7d6ca246c" exitCode=0 Oct 07 12:33:15 crc kubenswrapper[5024]: I1007 12:33:15.994956 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h2zjw" event={"ID":"408f514a-005e-4955-bca5-de53bc46161b","Type":"ContainerDied","Data":"f54463eb81972bde7b9edac0dd56355337007f41ce35e838b422e6d7d6ca246c"} Oct 07 12:33:15 crc kubenswrapper[5024]: I1007 12:33:15.997492 5024 generic.go:334] "Generic (PLEG): container finished" podID="f3b3a3ab-50e1-4fc2-942e-b860023a3bb5" containerID="759ea09e17ab26cb9256402bd1b81d3aebb47bd47b5f0190ec997f4cd11abe7e" exitCode=0 Oct 07 12:33:15 crc kubenswrapper[5024]: I1007 12:33:15.997581 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjz7t" event={"ID":"f3b3a3ab-50e1-4fc2-942e-b860023a3bb5","Type":"ContainerDied","Data":"759ea09e17ab26cb9256402bd1b81d3aebb47bd47b5f0190ec997f4cd11abe7e"} Oct 07 12:33:16 crc 
kubenswrapper[5024]: I1007 12:33:16.015917 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7cxfk" podStartSLOduration=2.594347604 podStartE2EDuration="5.015897443s" podCreationTimestamp="2025-10-07 12:33:11 +0000 UTC" firstStartedPulling="2025-10-07 12:33:12.946728348 +0000 UTC m=+331.022515186" lastFinishedPulling="2025-10-07 12:33:15.368278187 +0000 UTC m=+333.444065025" observedRunningTime="2025-10-07 12:33:16.01305998 +0000 UTC m=+334.088846828" watchObservedRunningTime="2025-10-07 12:33:16.015897443 +0000 UTC m=+334.091684281" Oct 07 12:33:18 crc kubenswrapper[5024]: I1007 12:33:18.009211 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h2zjw" event={"ID":"408f514a-005e-4955-bca5-de53bc46161b","Type":"ContainerStarted","Data":"2831438e13d3de0b9d38a9ccc89d6b3ba2bd530f195d7889e552fe820bf7b0ca"} Oct 07 12:33:18 crc kubenswrapper[5024]: I1007 12:33:18.012385 5024 generic.go:334] "Generic (PLEG): container finished" podID="f3b3a3ab-50e1-4fc2-942e-b860023a3bb5" containerID="f550b21ebfcf0f7df9fefa03bff8fc1f910e3dbee57884ce4cbc070da43878ee" exitCode=0 Oct 07 12:33:18 crc kubenswrapper[5024]: I1007 12:33:18.012450 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjz7t" event={"ID":"f3b3a3ab-50e1-4fc2-942e-b860023a3bb5","Type":"ContainerDied","Data":"f550b21ebfcf0f7df9fefa03bff8fc1f910e3dbee57884ce4cbc070da43878ee"} Oct 07 12:33:18 crc kubenswrapper[5024]: I1007 12:33:18.026853 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h2zjw" podStartSLOduration=3.553750633 podStartE2EDuration="5.026832469s" podCreationTimestamp="2025-10-07 12:33:13 +0000 UTC" firstStartedPulling="2025-10-07 12:33:14.978559928 +0000 UTC m=+333.054346766" lastFinishedPulling="2025-10-07 12:33:16.451641754 +0000 UTC m=+334.527428602" 
observedRunningTime="2025-10-07 12:33:18.026667564 +0000 UTC m=+336.102454402" watchObservedRunningTime="2025-10-07 12:33:18.026832469 +0000 UTC m=+336.102619307" Oct 07 12:33:19 crc kubenswrapper[5024]: I1007 12:33:19.019751 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjz7t" event={"ID":"f3b3a3ab-50e1-4fc2-942e-b860023a3bb5","Type":"ContainerStarted","Data":"e3e88e4ca73cd6120499ab7e0fcd536e26cd92212ab5213f7f19b00cfb7def56"} Oct 07 12:33:21 crc kubenswrapper[5024]: I1007 12:33:21.812207 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zv7nb" Oct 07 12:33:21 crc kubenswrapper[5024]: I1007 12:33:21.812579 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zv7nb" Oct 07 12:33:21 crc kubenswrapper[5024]: I1007 12:33:21.876910 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zv7nb" Oct 07 12:33:21 crc kubenswrapper[5024]: I1007 12:33:21.898649 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mjz7t" podStartSLOduration=5.368731591 podStartE2EDuration="7.898631363s" podCreationTimestamp="2025-10-07 12:33:14 +0000 UTC" firstStartedPulling="2025-10-07 12:33:16.000885512 +0000 UTC m=+334.076672370" lastFinishedPulling="2025-10-07 12:33:18.530785304 +0000 UTC m=+336.606572142" observedRunningTime="2025-10-07 12:33:19.037227392 +0000 UTC m=+337.113014230" watchObservedRunningTime="2025-10-07 12:33:21.898631363 +0000 UTC m=+339.974418201" Oct 07 12:33:22 crc kubenswrapper[5024]: I1007 12:33:22.029869 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7cxfk" Oct 07 12:33:22 crc kubenswrapper[5024]: I1007 12:33:22.031216 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-operators-7cxfk" Oct 07 12:33:22 crc kubenswrapper[5024]: I1007 12:33:22.075718 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7cxfk" Oct 07 12:33:22 crc kubenswrapper[5024]: I1007 12:33:22.078216 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zv7nb" Oct 07 12:33:23 crc kubenswrapper[5024]: I1007 12:33:23.078864 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7cxfk" Oct 07 12:33:24 crc kubenswrapper[5024]: I1007 12:33:24.305594 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h2zjw" Oct 07 12:33:24 crc kubenswrapper[5024]: I1007 12:33:24.305664 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h2zjw" Oct 07 12:33:24 crc kubenswrapper[5024]: I1007 12:33:24.344486 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h2zjw" Oct 07 12:33:24 crc kubenswrapper[5024]: I1007 12:33:24.401119 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mjz7t" Oct 07 12:33:24 crc kubenswrapper[5024]: I1007 12:33:24.401501 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mjz7t" Oct 07 12:33:24 crc kubenswrapper[5024]: I1007 12:33:24.437705 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mjz7t" Oct 07 12:33:25 crc kubenswrapper[5024]: I1007 12:33:25.085750 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h2zjw" Oct 07 12:33:25 crc kubenswrapper[5024]: 
I1007 12:33:25.089929 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mjz7t" Oct 07 12:33:43 crc kubenswrapper[5024]: I1007 12:33:43.720580 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 12:33:43 crc kubenswrapper[5024]: I1007 12:33:43.721456 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 12:34:13 crc kubenswrapper[5024]: I1007 12:34:13.720036 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 12:34:13 crc kubenswrapper[5024]: I1007 12:34:13.720583 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 12:34:43 crc kubenswrapper[5024]: I1007 12:34:43.720928 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Oct 07 12:34:43 crc kubenswrapper[5024]: I1007 12:34:43.721604 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 12:34:43 crc kubenswrapper[5024]: I1007 12:34:43.721668 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 12:34:43 crc kubenswrapper[5024]: I1007 12:34:43.722372 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"192929481f263af5a33e1bbf20fdadd1eb38459de268ceb1b2b6d1edef4716e1"} pod="openshift-machine-config-operator/machine-config-daemon-t95cr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 07 12:34:43 crc kubenswrapper[5024]: I1007 12:34:43.722447 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" containerID="cri-o://192929481f263af5a33e1bbf20fdadd1eb38459de268ceb1b2b6d1edef4716e1" gracePeriod=600 Oct 07 12:34:44 crc kubenswrapper[5024]: I1007 12:34:44.442797 5024 generic.go:334] "Generic (PLEG): container finished" podID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerID="192929481f263af5a33e1bbf20fdadd1eb38459de268ceb1b2b6d1edef4716e1" exitCode=0 Oct 07 12:34:44 crc kubenswrapper[5024]: I1007 12:34:44.442896 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" 
event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerDied","Data":"192929481f263af5a33e1bbf20fdadd1eb38459de268ceb1b2b6d1edef4716e1"} Oct 07 12:34:44 crc kubenswrapper[5024]: I1007 12:34:44.443162 5024 scope.go:117] "RemoveContainer" containerID="8240a64d5d438c3e82314de5182f6ec73c4dd934557773a3d83fdf52378ee6c4" Oct 07 12:34:45 crc kubenswrapper[5024]: I1007 12:34:45.449949 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerStarted","Data":"30aaacaa3f604c9e5d817c770e527cca534161741df28a433a5a25d34542b60e"} Oct 07 12:35:34 crc kubenswrapper[5024]: I1007 12:35:34.705049 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8qv4w"] Oct 07 12:35:34 crc kubenswrapper[5024]: I1007 12:35:34.706478 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w" Oct 07 12:35:34 crc kubenswrapper[5024]: I1007 12:35:34.715916 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8qv4w"] Oct 07 12:35:34 crc kubenswrapper[5024]: I1007 12:35:34.831775 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2af8ce47-4ba5-4192-a107-9e1ba5f512f1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8qv4w\" (UID: \"2af8ce47-4ba5-4192-a107-9e1ba5f512f1\") " pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w" Oct 07 12:35:34 crc kubenswrapper[5024]: I1007 12:35:34.831888 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s7z9\" (UniqueName: \"kubernetes.io/projected/2af8ce47-4ba5-4192-a107-9e1ba5f512f1-kube-api-access-2s7z9\") pod 
\"image-registry-66df7c8f76-8qv4w\" (UID: \"2af8ce47-4ba5-4192-a107-9e1ba5f512f1\") " pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w" Oct 07 12:35:34 crc kubenswrapper[5024]: I1007 12:35:34.831914 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-8qv4w\" (UID: \"2af8ce47-4ba5-4192-a107-9e1ba5f512f1\") " pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w" Oct 07 12:35:34 crc kubenswrapper[5024]: I1007 12:35:34.831951 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2af8ce47-4ba5-4192-a107-9e1ba5f512f1-bound-sa-token\") pod \"image-registry-66df7c8f76-8qv4w\" (UID: \"2af8ce47-4ba5-4192-a107-9e1ba5f512f1\") " pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w" Oct 07 12:35:34 crc kubenswrapper[5024]: I1007 12:35:34.831994 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2af8ce47-4ba5-4192-a107-9e1ba5f512f1-registry-tls\") pod \"image-registry-66df7c8f76-8qv4w\" (UID: \"2af8ce47-4ba5-4192-a107-9e1ba5f512f1\") " pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w" Oct 07 12:35:34 crc kubenswrapper[5024]: I1007 12:35:34.832016 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2af8ce47-4ba5-4192-a107-9e1ba5f512f1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8qv4w\" (UID: \"2af8ce47-4ba5-4192-a107-9e1ba5f512f1\") " pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w" Oct 07 12:35:34 crc kubenswrapper[5024]: I1007 12:35:34.832041 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2af8ce47-4ba5-4192-a107-9e1ba5f512f1-registry-certificates\") pod \"image-registry-66df7c8f76-8qv4w\" (UID: \"2af8ce47-4ba5-4192-a107-9e1ba5f512f1\") " pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w"
Oct 07 12:35:34 crc kubenswrapper[5024]: I1007 12:35:34.832061 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2af8ce47-4ba5-4192-a107-9e1ba5f512f1-trusted-ca\") pod \"image-registry-66df7c8f76-8qv4w\" (UID: \"2af8ce47-4ba5-4192-a107-9e1ba5f512f1\") " pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w"
Oct 07 12:35:34 crc kubenswrapper[5024]: I1007 12:35:34.856252 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-8qv4w\" (UID: \"2af8ce47-4ba5-4192-a107-9e1ba5f512f1\") " pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w"
Oct 07 12:35:34 crc kubenswrapper[5024]: I1007 12:35:34.932830 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2af8ce47-4ba5-4192-a107-9e1ba5f512f1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8qv4w\" (UID: \"2af8ce47-4ba5-4192-a107-9e1ba5f512f1\") " pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w"
Oct 07 12:35:34 crc kubenswrapper[5024]: I1007 12:35:34.932883 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s7z9\" (UniqueName: \"kubernetes.io/projected/2af8ce47-4ba5-4192-a107-9e1ba5f512f1-kube-api-access-2s7z9\") pod \"image-registry-66df7c8f76-8qv4w\" (UID: \"2af8ce47-4ba5-4192-a107-9e1ba5f512f1\") " pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w"
Oct 07 12:35:34 crc kubenswrapper[5024]: I1007 12:35:34.932991 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2af8ce47-4ba5-4192-a107-9e1ba5f512f1-bound-sa-token\") pod \"image-registry-66df7c8f76-8qv4w\" (UID: \"2af8ce47-4ba5-4192-a107-9e1ba5f512f1\") " pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w"
Oct 07 12:35:34 crc kubenswrapper[5024]: I1007 12:35:34.933053 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2af8ce47-4ba5-4192-a107-9e1ba5f512f1-registry-tls\") pod \"image-registry-66df7c8f76-8qv4w\" (UID: \"2af8ce47-4ba5-4192-a107-9e1ba5f512f1\") " pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w"
Oct 07 12:35:34 crc kubenswrapper[5024]: I1007 12:35:34.933093 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2af8ce47-4ba5-4192-a107-9e1ba5f512f1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8qv4w\" (UID: \"2af8ce47-4ba5-4192-a107-9e1ba5f512f1\") " pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w"
Oct 07 12:35:34 crc kubenswrapper[5024]: I1007 12:35:34.933156 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2af8ce47-4ba5-4192-a107-9e1ba5f512f1-registry-certificates\") pod \"image-registry-66df7c8f76-8qv4w\" (UID: \"2af8ce47-4ba5-4192-a107-9e1ba5f512f1\") " pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w"
Oct 07 12:35:34 crc kubenswrapper[5024]: I1007 12:35:34.933184 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2af8ce47-4ba5-4192-a107-9e1ba5f512f1-trusted-ca\") pod \"image-registry-66df7c8f76-8qv4w\" (UID: \"2af8ce47-4ba5-4192-a107-9e1ba5f512f1\") " pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w"
Oct 07 12:35:34 crc kubenswrapper[5024]: I1007 12:35:34.934251 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2af8ce47-4ba5-4192-a107-9e1ba5f512f1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8qv4w\" (UID: \"2af8ce47-4ba5-4192-a107-9e1ba5f512f1\") " pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w"
Oct 07 12:35:34 crc kubenswrapper[5024]: I1007 12:35:34.935174 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2af8ce47-4ba5-4192-a107-9e1ba5f512f1-registry-certificates\") pod \"image-registry-66df7c8f76-8qv4w\" (UID: \"2af8ce47-4ba5-4192-a107-9e1ba5f512f1\") " pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w"
Oct 07 12:35:34 crc kubenswrapper[5024]: I1007 12:35:34.935338 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2af8ce47-4ba5-4192-a107-9e1ba5f512f1-trusted-ca\") pod \"image-registry-66df7c8f76-8qv4w\" (UID: \"2af8ce47-4ba5-4192-a107-9e1ba5f512f1\") " pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w"
Oct 07 12:35:34 crc kubenswrapper[5024]: I1007 12:35:34.939453 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2af8ce47-4ba5-4192-a107-9e1ba5f512f1-registry-tls\") pod \"image-registry-66df7c8f76-8qv4w\" (UID: \"2af8ce47-4ba5-4192-a107-9e1ba5f512f1\") " pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w"
Oct 07 12:35:34 crc kubenswrapper[5024]: I1007 12:35:34.939459 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2af8ce47-4ba5-4192-a107-9e1ba5f512f1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8qv4w\" (UID: \"2af8ce47-4ba5-4192-a107-9e1ba5f512f1\") " pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w"
Oct 07 12:35:34 crc kubenswrapper[5024]: I1007 12:35:34.948985 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s7z9\" (UniqueName: \"kubernetes.io/projected/2af8ce47-4ba5-4192-a107-9e1ba5f512f1-kube-api-access-2s7z9\") pod \"image-registry-66df7c8f76-8qv4w\" (UID: \"2af8ce47-4ba5-4192-a107-9e1ba5f512f1\") " pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w"
Oct 07 12:35:34 crc kubenswrapper[5024]: I1007 12:35:34.950294 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2af8ce47-4ba5-4192-a107-9e1ba5f512f1-bound-sa-token\") pod \"image-registry-66df7c8f76-8qv4w\" (UID: \"2af8ce47-4ba5-4192-a107-9e1ba5f512f1\") " pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w"
Oct 07 12:35:35 crc kubenswrapper[5024]: I1007 12:35:35.025512 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w"
Oct 07 12:35:35 crc kubenswrapper[5024]: I1007 12:35:35.191861 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8qv4w"]
Oct 07 12:35:35 crc kubenswrapper[5024]: I1007 12:35:35.682777 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w" event={"ID":"2af8ce47-4ba5-4192-a107-9e1ba5f512f1","Type":"ContainerStarted","Data":"4ab9ff91d8d44bef4e479323cbcdc66fa1423382a64573796235de8272c1ec3f"}
Oct 07 12:35:35 crc kubenswrapper[5024]: I1007 12:35:35.683169 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w"
Oct 07 12:35:35 crc kubenswrapper[5024]: I1007 12:35:35.683187 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w" event={"ID":"2af8ce47-4ba5-4192-a107-9e1ba5f512f1","Type":"ContainerStarted","Data":"381999c09820628eaf0103c4442fa8f34f6e180a857577c80f2bb2302b62d696"}
Oct 07 12:35:35 crc kubenswrapper[5024]: I1007 12:35:35.702527 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w" podStartSLOduration=1.702511695 podStartE2EDuration="1.702511695s" podCreationTimestamp="2025-10-07 12:35:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:35:35.701580838 +0000 UTC m=+473.777367676" watchObservedRunningTime="2025-10-07 12:35:35.702511695 +0000 UTC m=+473.778298533"
Oct 07 12:35:55 crc kubenswrapper[5024]: I1007 12:35:55.030130 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-8qv4w"
Oct 07 12:35:55 crc kubenswrapper[5024]: I1007 12:35:55.094440 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xs54z"]
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.163412 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" podUID="1f471f75-4e1e-4093-975b-e02e2b7f8b32" containerName="registry" containerID="cri-o://8e269dbf9814de474ac41940ec760e1e1d38effa1f86d99792d06eacedec7a9b" gracePeriod=30
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.528946 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xs54z"
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.670202 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1f471f75-4e1e-4093-975b-e02e2b7f8b32-bound-sa-token\") pod \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") "
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.670338 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1f471f75-4e1e-4093-975b-e02e2b7f8b32-registry-tls\") pod \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") "
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.671386 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f471f75-4e1e-4093-975b-e02e2b7f8b32-trusted-ca\") pod \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") "
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.671444 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hbpw\" (UniqueName: \"kubernetes.io/projected/1f471f75-4e1e-4093-975b-e02e2b7f8b32-kube-api-access-8hbpw\") pod \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") "
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.671614 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") "
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.671663 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1f471f75-4e1e-4093-975b-e02e2b7f8b32-installation-pull-secrets\") pod \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") "
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.671686 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1f471f75-4e1e-4093-975b-e02e2b7f8b32-ca-trust-extracted\") pod \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") "
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.671708 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1f471f75-4e1e-4093-975b-e02e2b7f8b32-registry-certificates\") pod \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\" (UID: \"1f471f75-4e1e-4093-975b-e02e2b7f8b32\") "
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.672808 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f471f75-4e1e-4093-975b-e02e2b7f8b32-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "1f471f75-4e1e-4093-975b-e02e2b7f8b32" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.672993 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f471f75-4e1e-4093-975b-e02e2b7f8b32-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "1f471f75-4e1e-4093-975b-e02e2b7f8b32" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.676470 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f471f75-4e1e-4093-975b-e02e2b7f8b32-kube-api-access-8hbpw" (OuterVolumeSpecName: "kube-api-access-8hbpw") pod "1f471f75-4e1e-4093-975b-e02e2b7f8b32" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32"). InnerVolumeSpecName "kube-api-access-8hbpw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.677026 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f471f75-4e1e-4093-975b-e02e2b7f8b32-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "1f471f75-4e1e-4093-975b-e02e2b7f8b32" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.677755 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f471f75-4e1e-4093-975b-e02e2b7f8b32-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "1f471f75-4e1e-4093-975b-e02e2b7f8b32" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.678035 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f471f75-4e1e-4093-975b-e02e2b7f8b32-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "1f471f75-4e1e-4093-975b-e02e2b7f8b32" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.695621 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "1f471f75-4e1e-4093-975b-e02e2b7f8b32" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.697667 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f471f75-4e1e-4093-975b-e02e2b7f8b32-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "1f471f75-4e1e-4093-975b-e02e2b7f8b32" (UID: "1f471f75-4e1e-4093-975b-e02e2b7f8b32"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.773562 5024 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1f471f75-4e1e-4093-975b-e02e2b7f8b32-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.773607 5024 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1f471f75-4e1e-4093-975b-e02e2b7f8b32-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.773620 5024 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1f471f75-4e1e-4093-975b-e02e2b7f8b32-registry-certificates\") on node \"crc\" DevicePath \"\""
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.773632 5024 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1f471f75-4e1e-4093-975b-e02e2b7f8b32-bound-sa-token\") on node \"crc\" DevicePath \"\""
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.773643 5024 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1f471f75-4e1e-4093-975b-e02e2b7f8b32-registry-tls\") on node \"crc\" DevicePath \"\""
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.773656 5024 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f471f75-4e1e-4093-975b-e02e2b7f8b32-trusted-ca\") on node \"crc\" DevicePath \"\""
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.773666 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hbpw\" (UniqueName: \"kubernetes.io/projected/1f471f75-4e1e-4093-975b-e02e2b7f8b32-kube-api-access-8hbpw\") on node \"crc\" DevicePath \"\""
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.917343 5024 generic.go:334] "Generic (PLEG): container finished" podID="1f471f75-4e1e-4093-975b-e02e2b7f8b32" containerID="8e269dbf9814de474ac41940ec760e1e1d38effa1f86d99792d06eacedec7a9b" exitCode=0
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.917394 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" event={"ID":"1f471f75-4e1e-4093-975b-e02e2b7f8b32","Type":"ContainerDied","Data":"8e269dbf9814de474ac41940ec760e1e1d38effa1f86d99792d06eacedec7a9b"}
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.917414 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xs54z"
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.917425 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xs54z" event={"ID":"1f471f75-4e1e-4093-975b-e02e2b7f8b32","Type":"ContainerDied","Data":"42760d67ce9f12f9032f8328ca7299dc36acbbefde68fef75e1b84c438f1fa22"}
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.917449 5024 scope.go:117] "RemoveContainer" containerID="8e269dbf9814de474ac41940ec760e1e1d38effa1f86d99792d06eacedec7a9b"
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.937186 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xs54z"]
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.940763 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xs54z"]
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.944707 5024 scope.go:117] "RemoveContainer" containerID="8e269dbf9814de474ac41940ec760e1e1d38effa1f86d99792d06eacedec7a9b"
Oct 07 12:36:20 crc kubenswrapper[5024]: E1007 12:36:20.945581 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e269dbf9814de474ac41940ec760e1e1d38effa1f86d99792d06eacedec7a9b\": container with ID starting with 8e269dbf9814de474ac41940ec760e1e1d38effa1f86d99792d06eacedec7a9b not found: ID does not exist" containerID="8e269dbf9814de474ac41940ec760e1e1d38effa1f86d99792d06eacedec7a9b"
Oct 07 12:36:20 crc kubenswrapper[5024]: I1007 12:36:20.945627 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e269dbf9814de474ac41940ec760e1e1d38effa1f86d99792d06eacedec7a9b"} err="failed to get container status \"8e269dbf9814de474ac41940ec760e1e1d38effa1f86d99792d06eacedec7a9b\": rpc error: code = NotFound desc = could not find container \"8e269dbf9814de474ac41940ec760e1e1d38effa1f86d99792d06eacedec7a9b\": container with ID starting with 8e269dbf9814de474ac41940ec760e1e1d38effa1f86d99792d06eacedec7a9b not found: ID does not exist"
Oct 07 12:36:22 crc kubenswrapper[5024]: I1007 12:36:22.758778 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f471f75-4e1e-4093-975b-e02e2b7f8b32" path="/var/lib/kubelet/pods/1f471f75-4e1e-4093-975b-e02e2b7f8b32/volumes"
Oct 07 12:37:13 crc kubenswrapper[5024]: I1007 12:37:13.721060 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 07 12:37:13 crc kubenswrapper[5024]: I1007 12:37:13.721730 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 07 12:37:43 crc kubenswrapper[5024]: I1007 12:37:43.720348 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 07 12:37:43 crc kubenswrapper[5024]: I1007 12:37:43.720914 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 07 12:38:13 crc kubenswrapper[5024]: I1007 12:38:13.720194 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 07 12:38:13 crc kubenswrapper[5024]: I1007 12:38:13.720820 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 07 12:38:13 crc kubenswrapper[5024]: I1007 12:38:13.720896 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t95cr"
Oct 07 12:38:13 crc kubenswrapper[5024]: I1007 12:38:13.721493 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"30aaacaa3f604c9e5d817c770e527cca534161741df28a433a5a25d34542b60e"} pod="openshift-machine-config-operator/machine-config-daemon-t95cr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 07 12:38:13 crc kubenswrapper[5024]: I1007 12:38:13.721539 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" containerID="cri-o://30aaacaa3f604c9e5d817c770e527cca534161741df28a433a5a25d34542b60e" gracePeriod=600
Oct 07 12:38:14 crc kubenswrapper[5024]: I1007 12:38:14.563829 5024 generic.go:334] "Generic (PLEG): container finished" podID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerID="30aaacaa3f604c9e5d817c770e527cca534161741df28a433a5a25d34542b60e" exitCode=0
Oct 07 12:38:14 crc kubenswrapper[5024]: I1007 12:38:14.563889 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerDied","Data":"30aaacaa3f604c9e5d817c770e527cca534161741df28a433a5a25d34542b60e"}
Oct 07 12:38:14 crc kubenswrapper[5024]: I1007 12:38:14.564194 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerStarted","Data":"ae467e69e08193e325a69f8bb005bb8c341ea340f7486140e99337d87e5c99d6"}
Oct 07 12:38:14 crc kubenswrapper[5024]: I1007 12:38:14.564213 5024 scope.go:117] "RemoveContainer" containerID="192929481f263af5a33e1bbf20fdadd1eb38459de268ceb1b2b6d1edef4716e1"
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.071173 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-sx9fb"]
Oct 07 12:39:14 crc kubenswrapper[5024]: E1007 12:39:14.072111 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f471f75-4e1e-4093-975b-e02e2b7f8b32" containerName="registry"
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.072150 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f471f75-4e1e-4093-975b-e02e2b7f8b32" containerName="registry"
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.072280 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f471f75-4e1e-4093-975b-e02e2b7f8b32" containerName="registry"
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.072796 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-sx9fb"
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.074688 5024 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-7pdnd"
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.074874 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.077748 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.082458 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-sx9fb"]
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.092476 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-hbtxm"]
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.093253 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-hbtxm"
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.097226 5024 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-lk8rr"
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.111121 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-mv7s2"]
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.111984 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-mv7s2"
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.123757 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-hbtxm"]
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.124777 5024 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-z2n4j"
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.134788 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-mv7s2"]
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.228487 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26xp4\" (UniqueName: \"kubernetes.io/projected/b7e175c1-168b-4ddc-8139-e7a758af32fb-kube-api-access-26xp4\") pod \"cert-manager-cainjector-7f985d654d-sx9fb\" (UID: \"b7e175c1-168b-4ddc-8139-e7a758af32fb\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-sx9fb"
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.228535 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xvk9\" (UniqueName: \"kubernetes.io/projected/d263f64b-97fc-41bb-9c16-467f65ebe30d-kube-api-access-8xvk9\") pod \"cert-manager-webhook-5655c58dd6-mv7s2\" (UID: \"d263f64b-97fc-41bb-9c16-467f65ebe30d\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-mv7s2"
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.228573 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph6gl\" (UniqueName: \"kubernetes.io/projected/9d627bc4-230f-466a-98af-87483cb62404-kube-api-access-ph6gl\") pod \"cert-manager-5b446d88c5-hbtxm\" (UID: \"9d627bc4-230f-466a-98af-87483cb62404\") " pod="cert-manager/cert-manager-5b446d88c5-hbtxm"
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.330130 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26xp4\" (UniqueName: \"kubernetes.io/projected/b7e175c1-168b-4ddc-8139-e7a758af32fb-kube-api-access-26xp4\") pod \"cert-manager-cainjector-7f985d654d-sx9fb\" (UID: \"b7e175c1-168b-4ddc-8139-e7a758af32fb\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-sx9fb"
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.330215 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xvk9\" (UniqueName: \"kubernetes.io/projected/d263f64b-97fc-41bb-9c16-467f65ebe30d-kube-api-access-8xvk9\") pod \"cert-manager-webhook-5655c58dd6-mv7s2\" (UID: \"d263f64b-97fc-41bb-9c16-467f65ebe30d\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-mv7s2"
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.330268 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph6gl\" (UniqueName: \"kubernetes.io/projected/9d627bc4-230f-466a-98af-87483cb62404-kube-api-access-ph6gl\") pod \"cert-manager-5b446d88c5-hbtxm\" (UID: \"9d627bc4-230f-466a-98af-87483cb62404\") " pod="cert-manager/cert-manager-5b446d88c5-hbtxm"
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.351050 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xvk9\" (UniqueName: \"kubernetes.io/projected/d263f64b-97fc-41bb-9c16-467f65ebe30d-kube-api-access-8xvk9\") pod \"cert-manager-webhook-5655c58dd6-mv7s2\" (UID: \"d263f64b-97fc-41bb-9c16-467f65ebe30d\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-mv7s2"
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.351256 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26xp4\" (UniqueName: \"kubernetes.io/projected/b7e175c1-168b-4ddc-8139-e7a758af32fb-kube-api-access-26xp4\") pod \"cert-manager-cainjector-7f985d654d-sx9fb\" (UID: \"b7e175c1-168b-4ddc-8139-e7a758af32fb\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-sx9fb"
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.353095 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph6gl\" (UniqueName: \"kubernetes.io/projected/9d627bc4-230f-466a-98af-87483cb62404-kube-api-access-ph6gl\") pod \"cert-manager-5b446d88c5-hbtxm\" (UID: \"9d627bc4-230f-466a-98af-87483cb62404\") " pod="cert-manager/cert-manager-5b446d88c5-hbtxm"
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.391092 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-sx9fb"
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.407593 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-hbtxm"
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.428871 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-mv7s2"
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.646162 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-hbtxm"]
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.660186 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.684196 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-mv7s2"]
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.819583 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-sx9fb"]
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.899828 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-sx9fb" event={"ID":"b7e175c1-168b-4ddc-8139-e7a758af32fb","Type":"ContainerStarted","Data":"0fff17af884d2d18caf2e3d125e8b8f89b20e1ff831a5d690396ddfe2a39200a"}
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.901051 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-hbtxm" event={"ID":"9d627bc4-230f-466a-98af-87483cb62404","Type":"ContainerStarted","Data":"721789bd13fdf2ff262f6780014c81a4539d6d34bd1b7ca307fb5662d74fd596"}
Oct 07 12:39:14 crc kubenswrapper[5024]: I1007 12:39:14.902448 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-mv7s2" event={"ID":"d263f64b-97fc-41bb-9c16-467f65ebe30d","Type":"ContainerStarted","Data":"cdbc172c4ca4304b66eabe857cb3ea92dfc43570dc1b562e6e3fb48f344ec3ae"}
Oct 07 12:39:21 crc kubenswrapper[5024]: I1007 12:39:21.939363 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-sx9fb" event={"ID":"b7e175c1-168b-4ddc-8139-e7a758af32fb","Type":"ContainerStarted","Data":"9c5291e76b475572ffa8fa4b57fd764c4822cc630efdaba10b7e417980d4b1a3"}
Oct 07 12:39:21 crc kubenswrapper[5024]: I1007 12:39:21.940463 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-hbtxm" event={"ID":"9d627bc4-230f-466a-98af-87483cb62404","Type":"ContainerStarted","Data":"97fdc02a22e1f07c9c8d12652c585134415155bd96a5085f47043207b472d0e6"}
Oct 07 12:39:21 crc kubenswrapper[5024]: I1007 12:39:21.942492 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-mv7s2" event={"ID":"d263f64b-97fc-41bb-9c16-467f65ebe30d","Type":"ContainerStarted","Data":"025d169902c348931d9046cb92713696fa98a846df05d8ec1147ac6fb41a4a5c"}
Oct 07 12:39:21 crc kubenswrapper[5024]: I1007 12:39:21.943128 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-mv7s2"
Oct 07 12:39:21 crc kubenswrapper[5024]: I1007 12:39:21.952804 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-sx9fb" podStartSLOduration=1.8123851420000001 podStartE2EDuration="7.952785675s" podCreationTimestamp="2025-10-07 12:39:14 +0000 UTC" firstStartedPulling="2025-10-07 12:39:14.82880119 +0000 UTC m=+692.904588028" lastFinishedPulling="2025-10-07 12:39:20.969201723 +0000 UTC m=+699.044988561" observedRunningTime="2025-10-07 12:39:21.948669904 +0000 UTC m=+700.024456742" watchObservedRunningTime="2025-10-07 12:39:21.952785675 +0000 UTC m=+700.028572513"
Oct 07 12:39:21 crc kubenswrapper[5024]: I1007 12:39:21.964760 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-mv7s2" podStartSLOduration=1.775965115 podStartE2EDuration="7.964742969s" podCreationTimestamp="2025-10-07 12:39:14 +0000 UTC" firstStartedPulling="2025-10-07 12:39:14.698104645 +0000 UTC m=+692.773891483" lastFinishedPulling="2025-10-07 12:39:20.886882499 +0000 UTC m=+698.962669337" observedRunningTime="2025-10-07 12:39:21.961176034 +0000 UTC m=+700.036962882" watchObservedRunningTime="2025-10-07 12:39:21.964742969 +0000 UTC m=+700.040529807"
Oct 07 12:39:21 crc kubenswrapper[5024]: I1007 12:39:21.978109 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-hbtxm" podStartSLOduration=1.894330097 podStartE2EDuration="7.978091144s" podCreationTimestamp="2025-10-07 12:39:14 +0000 UTC" firstStartedPulling="2025-10-07 12:39:14.659889095 +0000 UTC m=+692.735675933" lastFinishedPulling="2025-10-07 12:39:20.743650142 +0000 UTC m=+698.819436980" observedRunningTime="2025-10-07 12:39:21.977592979 +0000 UTC m=+700.053379807" watchObservedRunningTime="2025-10-07 12:39:21.978091144 +0000 UTC m=+700.053877972"
Oct 07 12:39:24 crc kubenswrapper[5024]: I1007 12:39:24.789899 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9b4h6"]
Oct 07 12:39:24 crc kubenswrapper[5024]: I1007 12:39:24.790564 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="ovn-controller" containerID="cri-o://21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c" gracePeriod=30
Oct 07 12:39:24 crc kubenswrapper[5024]: I1007 12:39:24.790677 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="northd" containerID="cri-o://3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a" gracePeriod=30
Oct 07 12:39:24 crc kubenswrapper[5024]: I1007 12:39:24.790726 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0" gracePeriod=30
Oct 07 12:39:24 crc kubenswrapper[5024]: I1007 12:39:24.790769 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="kube-rbac-proxy-node" containerID="cri-o://d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5" gracePeriod=30
Oct 07 12:39:24 crc kubenswrapper[5024]: I1007 12:39:24.790750 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="nbdb" containerID="cri-o://c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405" gracePeriod=30
Oct 07 12:39:24 crc kubenswrapper[5024]: I1007 12:39:24.790841 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="ovn-acl-logging" containerID="cri-o://85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a" gracePeriod=30
Oct 07 12:39:24 crc kubenswrapper[5024]: I1007 12:39:24.791190 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="sbdb" containerID="cri-o://e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d" gracePeriod=30
Oct 07 12:39:24 crc kubenswrapper[5024]: I1007 12:39:24.839676 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="ovnkube-controller"
containerID="cri-o://3b9faab32a6036b38d0ac5e68ee940cf5abcbeea2dab82a80da8a1543e26e120" gracePeriod=30 Oct 07 12:39:24 crc kubenswrapper[5024]: I1007 12:39:24.963711 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9b4h6_da5e4e6d-289a-4fc4-9672-2709c87b5258/ovnkube-controller/3.log" Oct 07 12:39:24 crc kubenswrapper[5024]: I1007 12:39:24.966173 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9b4h6_da5e4e6d-289a-4fc4-9672-2709c87b5258/ovn-acl-logging/0.log" Oct 07 12:39:24 crc kubenswrapper[5024]: I1007 12:39:24.966682 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9b4h6_da5e4e6d-289a-4fc4-9672-2709c87b5258/ovn-controller/0.log" Oct 07 12:39:24 crc kubenswrapper[5024]: I1007 12:39:24.967050 5024 generic.go:334] "Generic (PLEG): container finished" podID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerID="3b9faab32a6036b38d0ac5e68ee940cf5abcbeea2dab82a80da8a1543e26e120" exitCode=0 Oct 07 12:39:24 crc kubenswrapper[5024]: I1007 12:39:24.967098 5024 generic.go:334] "Generic (PLEG): container finished" podID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerID="7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0" exitCode=0 Oct 07 12:39:24 crc kubenswrapper[5024]: I1007 12:39:24.967105 5024 generic.go:334] "Generic (PLEG): container finished" podID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerID="d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5" exitCode=0 Oct 07 12:39:24 crc kubenswrapper[5024]: I1007 12:39:24.967114 5024 generic.go:334] "Generic (PLEG): container finished" podID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerID="85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a" exitCode=143 Oct 07 12:39:24 crc kubenswrapper[5024]: I1007 12:39:24.967110 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" 
event={"ID":"da5e4e6d-289a-4fc4-9672-2709c87b5258","Type":"ContainerDied","Data":"3b9faab32a6036b38d0ac5e68ee940cf5abcbeea2dab82a80da8a1543e26e120"} Oct 07 12:39:24 crc kubenswrapper[5024]: I1007 12:39:24.967210 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" event={"ID":"da5e4e6d-289a-4fc4-9672-2709c87b5258","Type":"ContainerDied","Data":"7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0"} Oct 07 12:39:24 crc kubenswrapper[5024]: I1007 12:39:24.967122 5024 generic.go:334] "Generic (PLEG): container finished" podID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerID="21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c" exitCode=143 Oct 07 12:39:24 crc kubenswrapper[5024]: I1007 12:39:24.967226 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" event={"ID":"da5e4e6d-289a-4fc4-9672-2709c87b5258","Type":"ContainerDied","Data":"d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5"} Oct 07 12:39:24 crc kubenswrapper[5024]: I1007 12:39:24.967237 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" event={"ID":"da5e4e6d-289a-4fc4-9672-2709c87b5258","Type":"ContainerDied","Data":"85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a"} Oct 07 12:39:24 crc kubenswrapper[5024]: I1007 12:39:24.967246 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" event={"ID":"da5e4e6d-289a-4fc4-9672-2709c87b5258","Type":"ContainerDied","Data":"21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c"} Oct 07 12:39:24 crc kubenswrapper[5024]: I1007 12:39:24.967263 5024 scope.go:117] "RemoveContainer" containerID="f92c76a71b47cb0ec44854712d380966c02da46ebcd883c9b6a6168adddf2385" Oct 07 12:39:24 crc kubenswrapper[5024]: I1007 12:39:24.970471 5024 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-rwxtd_f1ac3df5-bf16-419a-87c5-9683eebe3506/kube-multus/2.log" Oct 07 12:39:24 crc kubenswrapper[5024]: I1007 12:39:24.971009 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rwxtd_f1ac3df5-bf16-419a-87c5-9683eebe3506/kube-multus/1.log" Oct 07 12:39:24 crc kubenswrapper[5024]: I1007 12:39:24.971038 5024 generic.go:334] "Generic (PLEG): container finished" podID="f1ac3df5-bf16-419a-87c5-9683eebe3506" containerID="fde1c0d6da0160a347d332c1d9ec0498a3fb8ef5637318defbb2c5570cb46901" exitCode=2 Oct 07 12:39:24 crc kubenswrapper[5024]: I1007 12:39:24.971061 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rwxtd" event={"ID":"f1ac3df5-bf16-419a-87c5-9683eebe3506","Type":"ContainerDied","Data":"fde1c0d6da0160a347d332c1d9ec0498a3fb8ef5637318defbb2c5570cb46901"} Oct 07 12:39:24 crc kubenswrapper[5024]: I1007 12:39:24.971499 5024 scope.go:117] "RemoveContainer" containerID="fde1c0d6da0160a347d332c1d9ec0498a3fb8ef5637318defbb2c5570cb46901" Oct 07 12:39:24 crc kubenswrapper[5024]: E1007 12:39:24.972395 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-rwxtd_openshift-multus(f1ac3df5-bf16-419a-87c5-9683eebe3506)\"" pod="openshift-multus/multus-rwxtd" podUID="f1ac3df5-bf16-419a-87c5-9683eebe3506" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.015886 5024 scope.go:117] "RemoveContainer" containerID="79a2d929eb82dacb4be37b502ebd1bb31afa797eec7f9365c4c3a05be9154fbe" Oct 07 12:39:25 crc kubenswrapper[5024]: E1007 12:39:25.225468 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405 is running failed: container process not found" 
containerID="c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Oct 07 12:39:25 crc kubenswrapper[5024]: E1007 12:39:25.225630 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d is running failed: container process not found" containerID="e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Oct 07 12:39:25 crc kubenswrapper[5024]: E1007 12:39:25.225951 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d is running failed: container process not found" containerID="e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Oct 07 12:39:25 crc kubenswrapper[5024]: E1007 12:39:25.225964 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405 is running failed: container process not found" containerID="c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Oct 07 12:39:25 crc kubenswrapper[5024]: E1007 12:39:25.226304 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405 is running failed: container process not found" containerID="c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Oct 07 12:39:25 crc kubenswrapper[5024]: E1007 12:39:25.226349 5024 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="nbdb" Oct 07 12:39:25 crc kubenswrapper[5024]: E1007 12:39:25.226357 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d is running failed: container process not found" containerID="e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Oct 07 12:39:25 crc kubenswrapper[5024]: E1007 12:39:25.226464 5024 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="sbdb" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.670844 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9b4h6_da5e4e6d-289a-4fc4-9672-2709c87b5258/ovn-acl-logging/0.log" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.671279 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9b4h6_da5e4e6d-289a-4fc4-9672-2709c87b5258/ovn-controller/0.log" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.671669 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.730923 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wpjlz"] Oct 07 12:39:25 crc kubenswrapper[5024]: E1007 12:39:25.731181 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="northd" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.731200 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="northd" Oct 07 12:39:25 crc kubenswrapper[5024]: E1007 12:39:25.731210 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="sbdb" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.731219 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="sbdb" Oct 07 12:39:25 crc kubenswrapper[5024]: E1007 12:39:25.731230 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="kube-rbac-proxy-node" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.731238 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="kube-rbac-proxy-node" Oct 07 12:39:25 crc kubenswrapper[5024]: E1007 12:39:25.731246 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="ovnkube-controller" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.731253 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="ovnkube-controller" Oct 07 12:39:25 crc kubenswrapper[5024]: E1007 12:39:25.731264 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" 
containerName="ovn-acl-logging" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.731271 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="ovn-acl-logging" Oct 07 12:39:25 crc kubenswrapper[5024]: E1007 12:39:25.731280 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="nbdb" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.731286 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="nbdb" Oct 07 12:39:25 crc kubenswrapper[5024]: E1007 12:39:25.731298 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="ovnkube-controller" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.731305 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="ovnkube-controller" Oct 07 12:39:25 crc kubenswrapper[5024]: E1007 12:39:25.731319 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="kubecfg-setup" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.731327 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="kubecfg-setup" Oct 07 12:39:25 crc kubenswrapper[5024]: E1007 12:39:25.731334 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="kube-rbac-proxy-ovn-metrics" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.731341 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="kube-rbac-proxy-ovn-metrics" Oct 07 12:39:25 crc kubenswrapper[5024]: E1007 12:39:25.731353 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" 
containerName="ovnkube-controller" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.731360 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="ovnkube-controller" Oct 07 12:39:25 crc kubenswrapper[5024]: E1007 12:39:25.731370 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="ovn-controller" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.731377 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="ovn-controller" Oct 07 12:39:25 crc kubenswrapper[5024]: E1007 12:39:25.731386 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="ovnkube-controller" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.731393 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="ovnkube-controller" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.731513 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="ovn-acl-logging" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.731527 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="ovn-controller" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.731539 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="ovnkube-controller" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.731546 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="ovnkube-controller" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.731556 5024 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="kube-rbac-proxy-node" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.731566 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="nbdb" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.731573 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="sbdb" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.731584 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="ovnkube-controller" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.731595 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="ovnkube-controller" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.731606 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="northd" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.731618 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="kube-rbac-proxy-ovn-metrics" Oct 07 12:39:25 crc kubenswrapper[5024]: E1007 12:39:25.731717 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="ovnkube-controller" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.731727 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="ovnkube-controller" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.731843 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerName="ovnkube-controller" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.733701 5024 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.833209 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-cni-netd\") pod \"da5e4e6d-289a-4fc4-9672-2709c87b5258\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.833260 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-slash\") pod \"da5e4e6d-289a-4fc4-9672-2709c87b5258\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.833279 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-kubelet\") pod \"da5e4e6d-289a-4fc4-9672-2709c87b5258\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.833304 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/da5e4e6d-289a-4fc4-9672-2709c87b5258-ovnkube-script-lib\") pod \"da5e4e6d-289a-4fc4-9672-2709c87b5258\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.833362 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "da5e4e6d-289a-4fc4-9672-2709c87b5258" (UID: "da5e4e6d-289a-4fc4-9672-2709c87b5258"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.833414 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "da5e4e6d-289a-4fc4-9672-2709c87b5258" (UID: "da5e4e6d-289a-4fc4-9672-2709c87b5258"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.833458 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-slash" (OuterVolumeSpecName: "host-slash") pod "da5e4e6d-289a-4fc4-9672-2709c87b5258" (UID: "da5e4e6d-289a-4fc4-9672-2709c87b5258"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.833659 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da5e4e6d-289a-4fc4-9672-2709c87b5258-env-overrides\") pod \"da5e4e6d-289a-4fc4-9672-2709c87b5258\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.833766 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da5e4e6d-289a-4fc4-9672-2709c87b5258-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "da5e4e6d-289a-4fc4-9672-2709c87b5258" (UID: "da5e4e6d-289a-4fc4-9672-2709c87b5258"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.833843 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-run-netns\") pod \"da5e4e6d-289a-4fc4-9672-2709c87b5258\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.833886 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/da5e4e6d-289a-4fc4-9672-2709c87b5258-ovn-node-metrics-cert\") pod \"da5e4e6d-289a-4fc4-9672-2709c87b5258\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.833912 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-etc-openvswitch\") pod \"da5e4e6d-289a-4fc4-9672-2709c87b5258\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.833949 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-systemd-units\") pod \"da5e4e6d-289a-4fc4-9672-2709c87b5258\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.833956 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "da5e4e6d-289a-4fc4-9672-2709c87b5258" (UID: "da5e4e6d-289a-4fc4-9672-2709c87b5258"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.833974 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-node-log\") pod \"da5e4e6d-289a-4fc4-9672-2709c87b5258\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.833988 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "da5e4e6d-289a-4fc4-9672-2709c87b5258" (UID: "da5e4e6d-289a-4fc4-9672-2709c87b5258"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.833987 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da5e4e6d-289a-4fc4-9672-2709c87b5258-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "da5e4e6d-289a-4fc4-9672-2709c87b5258" (UID: "da5e4e6d-289a-4fc4-9672-2709c87b5258"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.833999 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-run-ovn-kubernetes\") pod \"da5e4e6d-289a-4fc4-9672-2709c87b5258\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") " Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.834012 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "da5e4e6d-289a-4fc4-9672-2709c87b5258" (UID: "da5e4e6d-289a-4fc4-9672-2709c87b5258"). 
InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.834028 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-node-log" (OuterVolumeSpecName: "node-log") pod "da5e4e6d-289a-4fc4-9672-2709c87b5258" (UID: "da5e4e6d-289a-4fc4-9672-2709c87b5258"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.834030 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-run-openvswitch\") pod \"da5e4e6d-289a-4fc4-9672-2709c87b5258\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") "
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.834060 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "da5e4e6d-289a-4fc4-9672-2709c87b5258" (UID: "da5e4e6d-289a-4fc4-9672-2709c87b5258"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.834074 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-var-lib-cni-networks-ovn-kubernetes\") pod \"da5e4e6d-289a-4fc4-9672-2709c87b5258\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") "
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.834098 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "da5e4e6d-289a-4fc4-9672-2709c87b5258" (UID: "da5e4e6d-289a-4fc4-9672-2709c87b5258"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.834110 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "da5e4e6d-289a-4fc4-9672-2709c87b5258" (UID: "da5e4e6d-289a-4fc4-9672-2709c87b5258"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.834106 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-log-socket\") pod \"da5e4e6d-289a-4fc4-9672-2709c87b5258\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") "
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.834191 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-var-lib-openvswitch\") pod \"da5e4e6d-289a-4fc4-9672-2709c87b5258\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") "
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.834224 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-run-ovn\") pod \"da5e4e6d-289a-4fc4-9672-2709c87b5258\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") "
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.834128 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-log-socket" (OuterVolumeSpecName: "log-socket") pod "da5e4e6d-289a-4fc4-9672-2709c87b5258" (UID: "da5e4e6d-289a-4fc4-9672-2709c87b5258"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.834250 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-cni-bin\") pod \"da5e4e6d-289a-4fc4-9672-2709c87b5258\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") "
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.834277 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "da5e4e6d-289a-4fc4-9672-2709c87b5258" (UID: "da5e4e6d-289a-4fc4-9672-2709c87b5258"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.834292 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-run-systemd\") pod \"da5e4e6d-289a-4fc4-9672-2709c87b5258\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") "
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.834303 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "da5e4e6d-289a-4fc4-9672-2709c87b5258" (UID: "da5e4e6d-289a-4fc4-9672-2709c87b5258"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.834295 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "da5e4e6d-289a-4fc4-9672-2709c87b5258" (UID: "da5e4e6d-289a-4fc4-9672-2709c87b5258"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.834410 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/da5e4e6d-289a-4fc4-9672-2709c87b5258-ovnkube-config\") pod \"da5e4e6d-289a-4fc4-9672-2709c87b5258\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") "
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.834438 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfsgk\" (UniqueName: \"kubernetes.io/projected/da5e4e6d-289a-4fc4-9672-2709c87b5258-kube-api-access-jfsgk\") pod \"da5e4e6d-289a-4fc4-9672-2709c87b5258\" (UID: \"da5e4e6d-289a-4fc4-9672-2709c87b5258\") "
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.834650 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-var-lib-openvswitch\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.834678 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-host-cni-netd\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.834717 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da5e4e6d-289a-4fc4-9672-2709c87b5258-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "da5e4e6d-289a-4fc4-9672-2709c87b5258" (UID: "da5e4e6d-289a-4fc4-9672-2709c87b5258"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.834707 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-run-systemd\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.835441 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-run-ovn\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.835553 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/271564b8-6672-4208-ab3c-63737bf386fa-ovnkube-config\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.835591 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-host-slash\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.835650 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-host-run-ovn-kubernetes\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.835687 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.835750 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-node-log\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.835792 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-log-socket\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.835840 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-host-kubelet\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.835865 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/271564b8-6672-4208-ab3c-63737bf386fa-env-overrides\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.835903 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-host-run-netns\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.835930 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-run-openvswitch\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.835958 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r57c\" (UniqueName: \"kubernetes.io/projected/271564b8-6672-4208-ab3c-63737bf386fa-kube-api-access-9r57c\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.835989 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/271564b8-6672-4208-ab3c-63737bf386fa-ovnkube-script-lib\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.836011 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-etc-openvswitch\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.836035 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-host-cni-bin\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.836077 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/271564b8-6672-4208-ab3c-63737bf386fa-ovn-node-metrics-cert\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.836222 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-systemd-units\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.836290 5024 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.836311 5024 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-systemd-units\") on node \"crc\" DevicePath \"\""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.836327 5024 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-node-log\") on node \"crc\" DevicePath \"\""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.836344 5024 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.836361 5024 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-run-openvswitch\") on node \"crc\" DevicePath \"\""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.836379 5024 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.836399 5024 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-log-socket\") on node \"crc\" DevicePath \"\""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.836412 5024 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.836426 5024 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-run-ovn\") on node \"crc\" DevicePath \"\""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.836438 5024 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-cni-bin\") on node \"crc\" DevicePath \"\""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.836450 5024 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/da5e4e6d-289a-4fc4-9672-2709c87b5258-ovnkube-config\") on node \"crc\" DevicePath \"\""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.836462 5024 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-cni-netd\") on node \"crc\" DevicePath \"\""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.836474 5024 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-slash\") on node \"crc\" DevicePath \"\""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.836485 5024 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-kubelet\") on node \"crc\" DevicePath \"\""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.836497 5024 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/da5e4e6d-289a-4fc4-9672-2709c87b5258-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.836508 5024 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da5e4e6d-289a-4fc4-9672-2709c87b5258-env-overrides\") on node \"crc\" DevicePath \"\""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.836519 5024 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-host-run-netns\") on node \"crc\" DevicePath \"\""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.839816 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da5e4e6d-289a-4fc4-9672-2709c87b5258-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "da5e4e6d-289a-4fc4-9672-2709c87b5258" (UID: "da5e4e6d-289a-4fc4-9672-2709c87b5258"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.840065 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da5e4e6d-289a-4fc4-9672-2709c87b5258-kube-api-access-jfsgk" (OuterVolumeSpecName: "kube-api-access-jfsgk") pod "da5e4e6d-289a-4fc4-9672-2709c87b5258" (UID: "da5e4e6d-289a-4fc4-9672-2709c87b5258"). InnerVolumeSpecName "kube-api-access-jfsgk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.847233 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "da5e4e6d-289a-4fc4-9672-2709c87b5258" (UID: "da5e4e6d-289a-4fc4-9672-2709c87b5258"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.937361 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-host-slash\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.937439 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-host-run-ovn-kubernetes\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.937464 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.937488 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-node-log\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.937506 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-log-socket\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.937532 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-host-kubelet\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.937550 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/271564b8-6672-4208-ab3c-63737bf386fa-env-overrides\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.937544 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-host-slash\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.937592 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-host-run-ovn-kubernetes\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.937645 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.937647 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-host-kubelet\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.937605 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-host-run-netns\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.937568 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-host-run-netns\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.937759 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-run-openvswitch\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.937656 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-log-socket\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.937803 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r57c\" (UniqueName: \"kubernetes.io/projected/271564b8-6672-4208-ab3c-63737bf386fa-kube-api-access-9r57c\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.937832 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-run-openvswitch\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.937855 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-etc-openvswitch\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.937889 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/271564b8-6672-4208-ab3c-63737bf386fa-ovnkube-script-lib\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.937919 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-host-cni-bin\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.937963 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/271564b8-6672-4208-ab3c-63737bf386fa-ovn-node-metrics-cert\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.937984 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-etc-openvswitch\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.938025 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-host-cni-bin\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.938030 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-systemd-units\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.938057 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-systemd-units\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.938109 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-var-lib-openvswitch\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.938179 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-var-lib-openvswitch\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.938204 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-host-cni-netd\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.938233 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-run-systemd\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.938298 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-host-cni-netd\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.938308 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-run-ovn\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.938339 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-run-ovn\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.938358 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-run-systemd\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.938383 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/271564b8-6672-4208-ab3c-63737bf386fa-ovnkube-config\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.938456 5024 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/da5e4e6d-289a-4fc4-9672-2709c87b5258-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.938470 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/271564b8-6672-4208-ab3c-63737bf386fa-env-overrides\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.938474 5024 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/da5e4e6d-289a-4fc4-9672-2709c87b5258-run-systemd\") on node \"crc\" DevicePath \"\""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.938531 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfsgk\" (UniqueName: \"kubernetes.io/projected/da5e4e6d-289a-4fc4-9672-2709c87b5258-kube-api-access-jfsgk\") on node \"crc\" DevicePath \"\""
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.938799 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/271564b8-6672-4208-ab3c-63737bf386fa-node-log\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.938877 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/271564b8-6672-4208-ab3c-63737bf386fa-ovnkube-script-lib\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.938993 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/271564b8-6672-4208-ab3c-63737bf386fa-ovnkube-config\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.941298 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/271564b8-6672-4208-ab3c-63737bf386fa-ovn-node-metrics-cert\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz"
Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 
12:39:25.952910 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r57c\" (UniqueName: \"kubernetes.io/projected/271564b8-6672-4208-ab3c-63737bf386fa-kube-api-access-9r57c\") pod \"ovnkube-node-wpjlz\" (UID: \"271564b8-6672-4208-ab3c-63737bf386fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.992391 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9b4h6_da5e4e6d-289a-4fc4-9672-2709c87b5258/ovn-acl-logging/0.log" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.992963 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9b4h6_da5e4e6d-289a-4fc4-9672-2709c87b5258/ovn-controller/0.log" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.993389 5024 generic.go:334] "Generic (PLEG): container finished" podID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerID="e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d" exitCode=0 Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.993428 5024 generic.go:334] "Generic (PLEG): container finished" podID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerID="c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405" exitCode=0 Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.993437 5024 generic.go:334] "Generic (PLEG): container finished" podID="da5e4e6d-289a-4fc4-9672-2709c87b5258" containerID="3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a" exitCode=0 Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.993503 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" event={"ID":"da5e4e6d-289a-4fc4-9672-2709c87b5258","Type":"ContainerDied","Data":"e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d"} Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.993520 5024 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.993533 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" event={"ID":"da5e4e6d-289a-4fc4-9672-2709c87b5258","Type":"ContainerDied","Data":"c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405"} Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.993548 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" event={"ID":"da5e4e6d-289a-4fc4-9672-2709c87b5258","Type":"ContainerDied","Data":"3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a"} Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.993560 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9b4h6" event={"ID":"da5e4e6d-289a-4fc4-9672-2709c87b5258","Type":"ContainerDied","Data":"13528f304b54836c3aaf59abaf5d72b6f1d3b65a59e4ed15c829a9c6966c43fa"} Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.993591 5024 scope.go:117] "RemoveContainer" containerID="3b9faab32a6036b38d0ac5e68ee940cf5abcbeea2dab82a80da8a1543e26e120" Oct 07 12:39:25 crc kubenswrapper[5024]: I1007 12:39:25.995173 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rwxtd_f1ac3df5-bf16-419a-87c5-9683eebe3506/kube-multus/2.log" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.016094 5024 scope.go:117] "RemoveContainer" containerID="e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.025395 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9b4h6"] Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.031771 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9b4h6"] Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 
12:39:26.035962 5024 scope.go:117] "RemoveContainer" containerID="c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.049233 5024 scope.go:117] "RemoveContainer" containerID="3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.049422 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.065628 5024 scope.go:117] "RemoveContainer" containerID="7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.099262 5024 scope.go:117] "RemoveContainer" containerID="d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.115489 5024 scope.go:117] "RemoveContainer" containerID="85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.134245 5024 scope.go:117] "RemoveContainer" containerID="21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.167106 5024 scope.go:117] "RemoveContainer" containerID="6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.183541 5024 scope.go:117] "RemoveContainer" containerID="3b9faab32a6036b38d0ac5e68ee940cf5abcbeea2dab82a80da8a1543e26e120" Oct 07 12:39:26 crc kubenswrapper[5024]: E1007 12:39:26.184076 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b9faab32a6036b38d0ac5e68ee940cf5abcbeea2dab82a80da8a1543e26e120\": container with ID starting with 3b9faab32a6036b38d0ac5e68ee940cf5abcbeea2dab82a80da8a1543e26e120 not found: ID does not exist" 
containerID="3b9faab32a6036b38d0ac5e68ee940cf5abcbeea2dab82a80da8a1543e26e120" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.184109 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b9faab32a6036b38d0ac5e68ee940cf5abcbeea2dab82a80da8a1543e26e120"} err="failed to get container status \"3b9faab32a6036b38d0ac5e68ee940cf5abcbeea2dab82a80da8a1543e26e120\": rpc error: code = NotFound desc = could not find container \"3b9faab32a6036b38d0ac5e68ee940cf5abcbeea2dab82a80da8a1543e26e120\": container with ID starting with 3b9faab32a6036b38d0ac5e68ee940cf5abcbeea2dab82a80da8a1543e26e120 not found: ID does not exist" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.184151 5024 scope.go:117] "RemoveContainer" containerID="e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d" Oct 07 12:39:26 crc kubenswrapper[5024]: E1007 12:39:26.184467 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\": container with ID starting with e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d not found: ID does not exist" containerID="e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.184504 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d"} err="failed to get container status \"e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\": rpc error: code = NotFound desc = could not find container \"e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\": container with ID starting with e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d not found: ID does not exist" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.184525 5024 scope.go:117] 
"RemoveContainer" containerID="c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405" Oct 07 12:39:26 crc kubenswrapper[5024]: E1007 12:39:26.184800 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\": container with ID starting with c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405 not found: ID does not exist" containerID="c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.184828 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405"} err="failed to get container status \"c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\": rpc error: code = NotFound desc = could not find container \"c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\": container with ID starting with c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405 not found: ID does not exist" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.184844 5024 scope.go:117] "RemoveContainer" containerID="3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a" Oct 07 12:39:26 crc kubenswrapper[5024]: E1007 12:39:26.185356 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\": container with ID starting with 3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a not found: ID does not exist" containerID="3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.185385 5024 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a"} err="failed to get container status \"3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\": rpc error: code = NotFound desc = could not find container \"3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\": container with ID starting with 3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a not found: ID does not exist" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.185403 5024 scope.go:117] "RemoveContainer" containerID="7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0" Oct 07 12:39:26 crc kubenswrapper[5024]: E1007 12:39:26.185843 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\": container with ID starting with 7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0 not found: ID does not exist" containerID="7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.185900 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0"} err="failed to get container status \"7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\": rpc error: code = NotFound desc = could not find container \"7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\": container with ID starting with 7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0 not found: ID does not exist" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.185935 5024 scope.go:117] "RemoveContainer" containerID="d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5" Oct 07 12:39:26 crc kubenswrapper[5024]: E1007 12:39:26.186543 5024 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\": container with ID starting with d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5 not found: ID does not exist" containerID="d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.186581 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5"} err="failed to get container status \"d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\": rpc error: code = NotFound desc = could not find container \"d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\": container with ID starting with d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5 not found: ID does not exist" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.186626 5024 scope.go:117] "RemoveContainer" containerID="85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a" Oct 07 12:39:26 crc kubenswrapper[5024]: E1007 12:39:26.186917 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\": container with ID starting with 85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a not found: ID does not exist" containerID="85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.186955 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a"} err="failed to get container status \"85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\": rpc error: code = NotFound desc = could not find container 
\"85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\": container with ID starting with 85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a not found: ID does not exist" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.186972 5024 scope.go:117] "RemoveContainer" containerID="21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c" Oct 07 12:39:26 crc kubenswrapper[5024]: E1007 12:39:26.187214 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\": container with ID starting with 21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c not found: ID does not exist" containerID="21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.187255 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c"} err="failed to get container status \"21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\": rpc error: code = NotFound desc = could not find container \"21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\": container with ID starting with 21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c not found: ID does not exist" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.187278 5024 scope.go:117] "RemoveContainer" containerID="6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b" Oct 07 12:39:26 crc kubenswrapper[5024]: E1007 12:39:26.187607 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\": container with ID starting with 6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b not found: ID does not exist" 
containerID="6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.187644 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b"} err="failed to get container status \"6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\": rpc error: code = NotFound desc = could not find container \"6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\": container with ID starting with 6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b not found: ID does not exist" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.187666 5024 scope.go:117] "RemoveContainer" containerID="3b9faab32a6036b38d0ac5e68ee940cf5abcbeea2dab82a80da8a1543e26e120" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.188005 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b9faab32a6036b38d0ac5e68ee940cf5abcbeea2dab82a80da8a1543e26e120"} err="failed to get container status \"3b9faab32a6036b38d0ac5e68ee940cf5abcbeea2dab82a80da8a1543e26e120\": rpc error: code = NotFound desc = could not find container \"3b9faab32a6036b38d0ac5e68ee940cf5abcbeea2dab82a80da8a1543e26e120\": container with ID starting with 3b9faab32a6036b38d0ac5e68ee940cf5abcbeea2dab82a80da8a1543e26e120 not found: ID does not exist" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.188031 5024 scope.go:117] "RemoveContainer" containerID="e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.188308 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d"} err="failed to get container status \"e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\": rpc error: code = NotFound desc = could 
not find container \"e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\": container with ID starting with e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d not found: ID does not exist" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.188335 5024 scope.go:117] "RemoveContainer" containerID="c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.188957 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405"} err="failed to get container status \"c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\": rpc error: code = NotFound desc = could not find container \"c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\": container with ID starting with c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405 not found: ID does not exist" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.188983 5024 scope.go:117] "RemoveContainer" containerID="3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.189338 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a"} err="failed to get container status \"3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\": rpc error: code = NotFound desc = could not find container \"3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\": container with ID starting with 3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a not found: ID does not exist" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.189377 5024 scope.go:117] "RemoveContainer" containerID="7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 
12:39:26.189703 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0"} err="failed to get container status \"7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\": rpc error: code = NotFound desc = could not find container \"7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\": container with ID starting with 7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0 not found: ID does not exist" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.189731 5024 scope.go:117] "RemoveContainer" containerID="d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.190076 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5"} err="failed to get container status \"d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\": rpc error: code = NotFound desc = could not find container \"d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\": container with ID starting with d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5 not found: ID does not exist" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.190102 5024 scope.go:117] "RemoveContainer" containerID="85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.190508 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a"} err="failed to get container status \"85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\": rpc error: code = NotFound desc = could not find container \"85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\": container with ID starting with 
85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a not found: ID does not exist" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.190537 5024 scope.go:117] "RemoveContainer" containerID="21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.190842 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c"} err="failed to get container status \"21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\": rpc error: code = NotFound desc = could not find container \"21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\": container with ID starting with 21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c not found: ID does not exist" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.190867 5024 scope.go:117] "RemoveContainer" containerID="6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.191173 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b"} err="failed to get container status \"6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\": rpc error: code = NotFound desc = could not find container \"6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\": container with ID starting with 6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b not found: ID does not exist" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.191200 5024 scope.go:117] "RemoveContainer" containerID="3b9faab32a6036b38d0ac5e68ee940cf5abcbeea2dab82a80da8a1543e26e120" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.192362 5024 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3b9faab32a6036b38d0ac5e68ee940cf5abcbeea2dab82a80da8a1543e26e120"} err="failed to get container status \"3b9faab32a6036b38d0ac5e68ee940cf5abcbeea2dab82a80da8a1543e26e120\": rpc error: code = NotFound desc = could not find container \"3b9faab32a6036b38d0ac5e68ee940cf5abcbeea2dab82a80da8a1543e26e120\": container with ID starting with 3b9faab32a6036b38d0ac5e68ee940cf5abcbeea2dab82a80da8a1543e26e120 not found: ID does not exist" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.192397 5024 scope.go:117] "RemoveContainer" containerID="e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.192790 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d"} err="failed to get container status \"e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\": rpc error: code = NotFound desc = could not find container \"e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d\": container with ID starting with e636c9aefa36168bb29abd8a4c2f9ad4f798c3c26737d57a325eb526b60a2c9d not found: ID does not exist" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.192822 5024 scope.go:117] "RemoveContainer" containerID="c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.193164 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405"} err="failed to get container status \"c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\": rpc error: code = NotFound desc = could not find container \"c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405\": container with ID starting with c47f7649d959afd3b5e8c0be25f7b9aa3999eee4fa99b944c41a191520a00405 not found: ID does not 
exist" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.193189 5024 scope.go:117] "RemoveContainer" containerID="3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.193449 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a"} err="failed to get container status \"3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\": rpc error: code = NotFound desc = could not find container \"3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a\": container with ID starting with 3c47d3ced6537054b0bc208782f58df2962c1db23c3767279566033f5fe7cf1a not found: ID does not exist" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.193475 5024 scope.go:117] "RemoveContainer" containerID="7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.193928 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0"} err="failed to get container status \"7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\": rpc error: code = NotFound desc = could not find container \"7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0\": container with ID starting with 7effe105ed00509c8038e74d8f367f525dd925f4d5078d91593bc606892c00d0 not found: ID does not exist" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.193992 5024 scope.go:117] "RemoveContainer" containerID="d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.194333 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5"} err="failed to get container status 
\"d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\": rpc error: code = NotFound desc = could not find container \"d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5\": container with ID starting with d077889d0a59e4c3586affd24a92ae22d9ef76f58a1695cd28b471dd907c60b5 not found: ID does not exist" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.194363 5024 scope.go:117] "RemoveContainer" containerID="85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.195276 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a"} err="failed to get container status \"85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\": rpc error: code = NotFound desc = could not find container \"85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a\": container with ID starting with 85cb9c9ef1d2bc669bac7d38dadb7f33fc69dd4e9efd6e8ca35b87f902c1bc3a not found: ID does not exist" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.195309 5024 scope.go:117] "RemoveContainer" containerID="21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.195578 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c"} err="failed to get container status \"21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\": rpc error: code = NotFound desc = could not find container \"21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c\": container with ID starting with 21471b7f1aee2abe5f8cb1e25aa63115de158b9b225dda5128301f247e61688c not found: ID does not exist" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.195611 5024 scope.go:117] "RemoveContainer" 
containerID="6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.195832 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b"} err="failed to get container status \"6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\": rpc error: code = NotFound desc = could not find container \"6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b\": container with ID starting with 6c1850a5835ab9bd999008cf3c44e780556c4ab31945123f0d2b52e2e496688b not found: ID does not exist" Oct 07 12:39:26 crc kubenswrapper[5024]: I1007 12:39:26.760193 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da5e4e6d-289a-4fc4-9672-2709c87b5258" path="/var/lib/kubelet/pods/da5e4e6d-289a-4fc4-9672-2709c87b5258/volumes" Oct 07 12:39:27 crc kubenswrapper[5024]: I1007 12:39:27.004644 5024 generic.go:334] "Generic (PLEG): container finished" podID="271564b8-6672-4208-ab3c-63737bf386fa" containerID="aba9e643b0e8e46dd76ab8b61fe46720e184439446d8fe907fa1dad86cd0d520" exitCode=0 Oct 07 12:39:27 crc kubenswrapper[5024]: I1007 12:39:27.004717 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz" event={"ID":"271564b8-6672-4208-ab3c-63737bf386fa","Type":"ContainerDied","Data":"aba9e643b0e8e46dd76ab8b61fe46720e184439446d8fe907fa1dad86cd0d520"} Oct 07 12:39:27 crc kubenswrapper[5024]: I1007 12:39:27.004749 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz" event={"ID":"271564b8-6672-4208-ab3c-63737bf386fa","Type":"ContainerStarted","Data":"e699d50d273a4fdb8c253b90127872decc342a854cbe50bac50debd5ee36b3aa"} Oct 07 12:39:28 crc kubenswrapper[5024]: I1007 12:39:28.015994 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz" 
event={"ID":"271564b8-6672-4208-ab3c-63737bf386fa","Type":"ContainerStarted","Data":"0ab4c6eeb84cb718d141bb974bedc46896a2155d63f59db4ca39d9779f72ace6"} Oct 07 12:39:28 crc kubenswrapper[5024]: I1007 12:39:28.016346 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz" event={"ID":"271564b8-6672-4208-ab3c-63737bf386fa","Type":"ContainerStarted","Data":"b02fef6aa576a98cf3a38e87e6dc65bb330474d5c18f271a728f75bd1a728306"} Oct 07 12:39:28 crc kubenswrapper[5024]: I1007 12:39:28.016357 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz" event={"ID":"271564b8-6672-4208-ab3c-63737bf386fa","Type":"ContainerStarted","Data":"2bcc03180678af378212a4ebedc463635d5f9a49f0d6e00777e13c8c60da14b0"} Oct 07 12:39:28 crc kubenswrapper[5024]: I1007 12:39:28.016366 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz" event={"ID":"271564b8-6672-4208-ab3c-63737bf386fa","Type":"ContainerStarted","Data":"533184a053a895c2b9c70bf84f05ae2e173f36338a88790582b8e5c5e671788a"} Oct 07 12:39:28 crc kubenswrapper[5024]: I1007 12:39:28.016374 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz" event={"ID":"271564b8-6672-4208-ab3c-63737bf386fa","Type":"ContainerStarted","Data":"ee320b36a55c319004294b1bbd5d3a9a9281385976e5dc8c6b9a242d3b88d5a3"} Oct 07 12:39:28 crc kubenswrapper[5024]: I1007 12:39:28.016383 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz" event={"ID":"271564b8-6672-4208-ab3c-63737bf386fa","Type":"ContainerStarted","Data":"bf83573f570e496b5caed38cf0718a489c377fff514f3bde678b12599dfc48ca"} Oct 07 12:39:29 crc kubenswrapper[5024]: I1007 12:39:29.433797 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-mv7s2" Oct 07 12:39:30 crc kubenswrapper[5024]: I1007 
12:39:30.031616 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz" event={"ID":"271564b8-6672-4208-ab3c-63737bf386fa","Type":"ContainerStarted","Data":"2be630738cb128117eb224ffb637d7578bfaeedb5edd04491aee7fb27701793c"} Oct 07 12:39:33 crc kubenswrapper[5024]: I1007 12:39:33.066436 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz" event={"ID":"271564b8-6672-4208-ab3c-63737bf386fa","Type":"ContainerStarted","Data":"bed21e181293996ce4507ea0601092cf6bece942d85ad152dda254e160bdf596"} Oct 07 12:39:33 crc kubenswrapper[5024]: I1007 12:39:33.067105 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz" Oct 07 12:39:33 crc kubenswrapper[5024]: I1007 12:39:33.067203 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz" Oct 07 12:39:33 crc kubenswrapper[5024]: I1007 12:39:33.099155 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz" Oct 07 12:39:33 crc kubenswrapper[5024]: I1007 12:39:33.106275 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz" podStartSLOduration=8.106258867 podStartE2EDuration="8.106258867s" podCreationTimestamp="2025-10-07 12:39:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:39:33.104520486 +0000 UTC m=+711.180307324" watchObservedRunningTime="2025-10-07 12:39:33.106258867 +0000 UTC m=+711.182045705" Oct 07 12:39:34 crc kubenswrapper[5024]: I1007 12:39:34.078879 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz" Oct 07 12:39:34 crc kubenswrapper[5024]: I1007 12:39:34.123323 5024 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz" Oct 07 12:39:40 crc kubenswrapper[5024]: I1007 12:39:40.751069 5024 scope.go:117] "RemoveContainer" containerID="fde1c0d6da0160a347d332c1d9ec0498a3fb8ef5637318defbb2c5570cb46901" Oct 07 12:39:40 crc kubenswrapper[5024]: E1007 12:39:40.751635 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-rwxtd_openshift-multus(f1ac3df5-bf16-419a-87c5-9683eebe3506)\"" pod="openshift-multus/multus-rwxtd" podUID="f1ac3df5-bf16-419a-87c5-9683eebe3506" Oct 07 12:39:52 crc kubenswrapper[5024]: I1007 12:39:52.754916 5024 scope.go:117] "RemoveContainer" containerID="fde1c0d6da0160a347d332c1d9ec0498a3fb8ef5637318defbb2c5570cb46901" Oct 07 12:39:53 crc kubenswrapper[5024]: I1007 12:39:53.182478 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rwxtd_f1ac3df5-bf16-419a-87c5-9683eebe3506/kube-multus/2.log" Oct 07 12:39:53 crc kubenswrapper[5024]: I1007 12:39:53.182726 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rwxtd" event={"ID":"f1ac3df5-bf16-419a-87c5-9683eebe3506","Type":"ContainerStarted","Data":"26d3fc6e6b631fed8332b9f4ade20c1192b456cdfbe6fd8dfd8bc201376453b2"} Oct 07 12:39:56 crc kubenswrapper[5024]: I1007 12:39:56.072721 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wpjlz" Oct 07 12:40:14 crc kubenswrapper[5024]: I1007 12:40:14.249078 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr"] Oct 07 12:40:14 crc kubenswrapper[5024]: I1007 12:40:14.252988 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr" Oct 07 12:40:14 crc kubenswrapper[5024]: I1007 12:40:14.255765 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 07 12:40:14 crc kubenswrapper[5024]: I1007 12:40:14.267356 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr"] Oct 07 12:40:14 crc kubenswrapper[5024]: I1007 12:40:14.343289 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13571b28-d44b-4e20-8e38-b6577b12fddf-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr\" (UID: \"13571b28-d44b-4e20-8e38-b6577b12fddf\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr" Oct 07 12:40:14 crc kubenswrapper[5024]: I1007 12:40:14.343335 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/13571b28-d44b-4e20-8e38-b6577b12fddf-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr\" (UID: \"13571b28-d44b-4e20-8e38-b6577b12fddf\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr" Oct 07 12:40:14 crc kubenswrapper[5024]: I1007 12:40:14.343395 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k7qh\" (UniqueName: \"kubernetes.io/projected/13571b28-d44b-4e20-8e38-b6577b12fddf-kube-api-access-8k7qh\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr\" (UID: \"13571b28-d44b-4e20-8e38-b6577b12fddf\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr" Oct 07 12:40:14 crc kubenswrapper[5024]: 
I1007 12:40:14.444209 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k7qh\" (UniqueName: \"kubernetes.io/projected/13571b28-d44b-4e20-8e38-b6577b12fddf-kube-api-access-8k7qh\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr\" (UID: \"13571b28-d44b-4e20-8e38-b6577b12fddf\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr" Oct 07 12:40:14 crc kubenswrapper[5024]: I1007 12:40:14.444289 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13571b28-d44b-4e20-8e38-b6577b12fddf-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr\" (UID: \"13571b28-d44b-4e20-8e38-b6577b12fddf\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr" Oct 07 12:40:14 crc kubenswrapper[5024]: I1007 12:40:14.444325 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/13571b28-d44b-4e20-8e38-b6577b12fddf-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr\" (UID: \"13571b28-d44b-4e20-8e38-b6577b12fddf\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr" Oct 07 12:40:14 crc kubenswrapper[5024]: I1007 12:40:14.444835 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/13571b28-d44b-4e20-8e38-b6577b12fddf-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr\" (UID: \"13571b28-d44b-4e20-8e38-b6577b12fddf\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr" Oct 07 12:40:14 crc kubenswrapper[5024]: I1007 12:40:14.444875 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/13571b28-d44b-4e20-8e38-b6577b12fddf-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr\" (UID: \"13571b28-d44b-4e20-8e38-b6577b12fddf\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr" Oct 07 12:40:14 crc kubenswrapper[5024]: I1007 12:40:14.472095 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k7qh\" (UniqueName: \"kubernetes.io/projected/13571b28-d44b-4e20-8e38-b6577b12fddf-kube-api-access-8k7qh\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr\" (UID: \"13571b28-d44b-4e20-8e38-b6577b12fddf\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr" Oct 07 12:40:14 crc kubenswrapper[5024]: I1007 12:40:14.588260 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr" Oct 07 12:40:14 crc kubenswrapper[5024]: I1007 12:40:14.820674 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr"] Oct 07 12:40:15 crc kubenswrapper[5024]: I1007 12:40:15.305219 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr" event={"ID":"13571b28-d44b-4e20-8e38-b6577b12fddf","Type":"ContainerStarted","Data":"f172ea84ab100573cdd3ddd84be91f13557a329b6ecffa7ee009618cee19725b"} Oct 07 12:40:15 crc kubenswrapper[5024]: I1007 12:40:15.305523 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr" event={"ID":"13571b28-d44b-4e20-8e38-b6577b12fddf","Type":"ContainerStarted","Data":"d932e4dd75c0809eb010d54ba9974705492c943e279f0785754cd2dd43ef853a"} Oct 07 12:40:16 crc kubenswrapper[5024]: I1007 12:40:16.313500 5024 
generic.go:334] "Generic (PLEG): container finished" podID="13571b28-d44b-4e20-8e38-b6577b12fddf" containerID="f172ea84ab100573cdd3ddd84be91f13557a329b6ecffa7ee009618cee19725b" exitCode=0 Oct 07 12:40:16 crc kubenswrapper[5024]: I1007 12:40:16.313555 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr" event={"ID":"13571b28-d44b-4e20-8e38-b6577b12fddf","Type":"ContainerDied","Data":"f172ea84ab100573cdd3ddd84be91f13557a329b6ecffa7ee009618cee19725b"} Oct 07 12:40:17 crc kubenswrapper[5024]: I1007 12:40:17.170472 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-25rxs"] Oct 07 12:40:17 crc kubenswrapper[5024]: I1007 12:40:17.170977 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs" podUID="f324f3c7-44fa-473c-8b60-ea30be3b7045" containerName="controller-manager" containerID="cri-o://cdf3d7e9fed648e6864c02bc58d72df761c899657a7bc9eb627789bad4c6e52e" gracePeriod=30 Oct 07 12:40:17 crc kubenswrapper[5024]: I1007 12:40:17.274500 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5"] Oct 07 12:40:17 crc kubenswrapper[5024]: I1007 12:40:17.274759 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5" podUID="a74a950f-a98b-45c9-bdd0-0cdda261396f" containerName="route-controller-manager" containerID="cri-o://62513c8cbba0d6e60d4e05984c07cd1e32e9b0ed0df234292f364be4c1076f72" gracePeriod=30 Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.179981 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.219091 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.303917 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f324f3c7-44fa-473c-8b60-ea30be3b7045-serving-cert\") pod \"f324f3c7-44fa-473c-8b60-ea30be3b7045\" (UID: \"f324f3c7-44fa-473c-8b60-ea30be3b7045\") " Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.304360 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a74a950f-a98b-45c9-bdd0-0cdda261396f-serving-cert\") pod \"a74a950f-a98b-45c9-bdd0-0cdda261396f\" (UID: \"a74a950f-a98b-45c9-bdd0-0cdda261396f\") " Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.304479 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f324f3c7-44fa-473c-8b60-ea30be3b7045-proxy-ca-bundles\") pod \"f324f3c7-44fa-473c-8b60-ea30be3b7045\" (UID: \"f324f3c7-44fa-473c-8b60-ea30be3b7045\") " Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.304636 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f324f3c7-44fa-473c-8b60-ea30be3b7045-config\") pod \"f324f3c7-44fa-473c-8b60-ea30be3b7045\" (UID: \"f324f3c7-44fa-473c-8b60-ea30be3b7045\") " Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.304742 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f324f3c7-44fa-473c-8b60-ea30be3b7045-client-ca\") pod 
\"f324f3c7-44fa-473c-8b60-ea30be3b7045\" (UID: \"f324f3c7-44fa-473c-8b60-ea30be3b7045\") " Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.304870 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5qpn\" (UniqueName: \"kubernetes.io/projected/a74a950f-a98b-45c9-bdd0-0cdda261396f-kube-api-access-c5qpn\") pod \"a74a950f-a98b-45c9-bdd0-0cdda261396f\" (UID: \"a74a950f-a98b-45c9-bdd0-0cdda261396f\") " Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.305039 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a74a950f-a98b-45c9-bdd0-0cdda261396f-config\") pod \"a74a950f-a98b-45c9-bdd0-0cdda261396f\" (UID: \"a74a950f-a98b-45c9-bdd0-0cdda261396f\") " Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.305193 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a74a950f-a98b-45c9-bdd0-0cdda261396f-client-ca\") pod \"a74a950f-a98b-45c9-bdd0-0cdda261396f\" (UID: \"a74a950f-a98b-45c9-bdd0-0cdda261396f\") " Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.305313 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbv42\" (UniqueName: \"kubernetes.io/projected/f324f3c7-44fa-473c-8b60-ea30be3b7045-kube-api-access-sbv42\") pod \"f324f3c7-44fa-473c-8b60-ea30be3b7045\" (UID: \"f324f3c7-44fa-473c-8b60-ea30be3b7045\") " Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.305394 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f324f3c7-44fa-473c-8b60-ea30be3b7045-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f324f3c7-44fa-473c-8b60-ea30be3b7045" (UID: "f324f3c7-44fa-473c-8b60-ea30be3b7045"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.305615 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f324f3c7-44fa-473c-8b60-ea30be3b7045-client-ca" (OuterVolumeSpecName: "client-ca") pod "f324f3c7-44fa-473c-8b60-ea30be3b7045" (UID: "f324f3c7-44fa-473c-8b60-ea30be3b7045"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.305879 5024 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f324f3c7-44fa-473c-8b60-ea30be3b7045-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.305977 5024 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f324f3c7-44fa-473c-8b60-ea30be3b7045-client-ca\") on node \"crc\" DevicePath \"\"" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.305945 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f324f3c7-44fa-473c-8b60-ea30be3b7045-config" (OuterVolumeSpecName: "config") pod "f324f3c7-44fa-473c-8b60-ea30be3b7045" (UID: "f324f3c7-44fa-473c-8b60-ea30be3b7045"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.306404 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a74a950f-a98b-45c9-bdd0-0cdda261396f-config" (OuterVolumeSpecName: "config") pod "a74a950f-a98b-45c9-bdd0-0cdda261396f" (UID: "a74a950f-a98b-45c9-bdd0-0cdda261396f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.306748 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a74a950f-a98b-45c9-bdd0-0cdda261396f-client-ca" (OuterVolumeSpecName: "client-ca") pod "a74a950f-a98b-45c9-bdd0-0cdda261396f" (UID: "a74a950f-a98b-45c9-bdd0-0cdda261396f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.317095 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f324f3c7-44fa-473c-8b60-ea30be3b7045-kube-api-access-sbv42" (OuterVolumeSpecName: "kube-api-access-sbv42") pod "f324f3c7-44fa-473c-8b60-ea30be3b7045" (UID: "f324f3c7-44fa-473c-8b60-ea30be3b7045"). InnerVolumeSpecName "kube-api-access-sbv42". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.317163 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a74a950f-a98b-45c9-bdd0-0cdda261396f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a74a950f-a98b-45c9-bdd0-0cdda261396f" (UID: "a74a950f-a98b-45c9-bdd0-0cdda261396f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.319541 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a74a950f-a98b-45c9-bdd0-0cdda261396f-kube-api-access-c5qpn" (OuterVolumeSpecName: "kube-api-access-c5qpn") pod "a74a950f-a98b-45c9-bdd0-0cdda261396f" (UID: "a74a950f-a98b-45c9-bdd0-0cdda261396f"). InnerVolumeSpecName "kube-api-access-c5qpn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.323778 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f324f3c7-44fa-473c-8b60-ea30be3b7045-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f324f3c7-44fa-473c-8b60-ea30be3b7045" (UID: "f324f3c7-44fa-473c-8b60-ea30be3b7045"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.325627 5024 generic.go:334] "Generic (PLEG): container finished" podID="f324f3c7-44fa-473c-8b60-ea30be3b7045" containerID="cdf3d7e9fed648e6864c02bc58d72df761c899657a7bc9eb627789bad4c6e52e" exitCode=0 Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.325710 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs" event={"ID":"f324f3c7-44fa-473c-8b60-ea30be3b7045","Type":"ContainerDied","Data":"cdf3d7e9fed648e6864c02bc58d72df761c899657a7bc9eb627789bad4c6e52e"} Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.325825 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs" event={"ID":"f324f3c7-44fa-473c-8b60-ea30be3b7045","Type":"ContainerDied","Data":"af5845f5793dbacd259bd20894aff6d9962eff4c415e592a03f77ee1a2f8a196"} Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.325863 5024 scope.go:117] "RemoveContainer" containerID="cdf3d7e9fed648e6864c02bc58d72df761c899657a7bc9eb627789bad4c6e52e" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.325795 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-25rxs" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.331433 5024 generic.go:334] "Generic (PLEG): container finished" podID="a74a950f-a98b-45c9-bdd0-0cdda261396f" containerID="62513c8cbba0d6e60d4e05984c07cd1e32e9b0ed0df234292f364be4c1076f72" exitCode=0 Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.331468 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.331502 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5" event={"ID":"a74a950f-a98b-45c9-bdd0-0cdda261396f","Type":"ContainerDied","Data":"62513c8cbba0d6e60d4e05984c07cd1e32e9b0ed0df234292f364be4c1076f72"} Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.331527 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5" event={"ID":"a74a950f-a98b-45c9-bdd0-0cdda261396f","Type":"ContainerDied","Data":"969dc18ff4abc57d7d3525a167ffd61e8e24fb42be93fedab8952660fa937db1"} Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.376362 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5"] Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.379881 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8vpl5"] Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.390029 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-25rxs"] Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.395226 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-25rxs"] Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.407591 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a74a950f-a98b-45c9-bdd0-0cdda261396f-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.407633 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f324f3c7-44fa-473c-8b60-ea30be3b7045-config\") on node \"crc\" DevicePath \"\"" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.407646 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5qpn\" (UniqueName: \"kubernetes.io/projected/a74a950f-a98b-45c9-bdd0-0cdda261396f-kube-api-access-c5qpn\") on node \"crc\" DevicePath \"\"" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.407658 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a74a950f-a98b-45c9-bdd0-0cdda261396f-config\") on node \"crc\" DevicePath \"\"" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.407666 5024 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a74a950f-a98b-45c9-bdd0-0cdda261396f-client-ca\") on node \"crc\" DevicePath \"\"" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.407675 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbv42\" (UniqueName: \"kubernetes.io/projected/f324f3c7-44fa-473c-8b60-ea30be3b7045-kube-api-access-sbv42\") on node \"crc\" DevicePath \"\"" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.407683 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f324f3c7-44fa-473c-8b60-ea30be3b7045-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.529410 5024 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dc94d4d99-mpc2p"] Oct 07 12:40:18 crc kubenswrapper[5024]: E1007 12:40:18.529719 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f324f3c7-44fa-473c-8b60-ea30be3b7045" containerName="controller-manager" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.529738 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f324f3c7-44fa-473c-8b60-ea30be3b7045" containerName="controller-manager" Oct 07 12:40:18 crc kubenswrapper[5024]: E1007 12:40:18.529761 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a74a950f-a98b-45c9-bdd0-0cdda261396f" containerName="route-controller-manager" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.529768 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a74a950f-a98b-45c9-bdd0-0cdda261396f" containerName="route-controller-manager" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.529853 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="a74a950f-a98b-45c9-bdd0-0cdda261396f" containerName="route-controller-manager" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.529868 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="f324f3c7-44fa-473c-8b60-ea30be3b7045" containerName="controller-manager" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.530282 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dc94d4d99-mpc2p" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.534181 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c8c74c7df-cp94v"] Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.534612 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.535072 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c8c74c7df-cp94v" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.536089 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.536304 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.536658 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.536990 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.538693 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.541264 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.541664 5024 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"serving-cert" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.541694 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.541739 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.541674 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.541881 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.552321 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c8c74c7df-cp94v"] Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.553395 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.558495 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dc94d4d99-mpc2p"] Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.609480 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a407dc8-adcc-43db-92bc-aaf0520b8597-config\") pod \"route-controller-manager-dc94d4d99-mpc2p\" (UID: \"6a407dc8-adcc-43db-92bc-aaf0520b8597\") " pod="openshift-route-controller-manager/route-controller-manager-dc94d4d99-mpc2p" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.609559 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/6a407dc8-adcc-43db-92bc-aaf0520b8597-client-ca\") pod \"route-controller-manager-dc94d4d99-mpc2p\" (UID: \"6a407dc8-adcc-43db-92bc-aaf0520b8597\") " pod="openshift-route-controller-manager/route-controller-manager-dc94d4d99-mpc2p" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.609656 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a407dc8-adcc-43db-92bc-aaf0520b8597-serving-cert\") pod \"route-controller-manager-dc94d4d99-mpc2p\" (UID: \"6a407dc8-adcc-43db-92bc-aaf0520b8597\") " pod="openshift-route-controller-manager/route-controller-manager-dc94d4d99-mpc2p" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.609772 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whmkb\" (UniqueName: \"kubernetes.io/projected/6a407dc8-adcc-43db-92bc-aaf0520b8597-kube-api-access-whmkb\") pod \"route-controller-manager-dc94d4d99-mpc2p\" (UID: \"6a407dc8-adcc-43db-92bc-aaf0520b8597\") " pod="openshift-route-controller-manager/route-controller-manager-dc94d4d99-mpc2p" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.692805 5024 scope.go:117] "RemoveContainer" containerID="cdf3d7e9fed648e6864c02bc58d72df761c899657a7bc9eb627789bad4c6e52e" Oct 07 12:40:18 crc kubenswrapper[5024]: E1007 12:40:18.694298 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdf3d7e9fed648e6864c02bc58d72df761c899657a7bc9eb627789bad4c6e52e\": container with ID starting with cdf3d7e9fed648e6864c02bc58d72df761c899657a7bc9eb627789bad4c6e52e not found: ID does not exist" containerID="cdf3d7e9fed648e6864c02bc58d72df761c899657a7bc9eb627789bad4c6e52e" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.694383 5024 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cdf3d7e9fed648e6864c02bc58d72df761c899657a7bc9eb627789bad4c6e52e"} err="failed to get container status \"cdf3d7e9fed648e6864c02bc58d72df761c899657a7bc9eb627789bad4c6e52e\": rpc error: code = NotFound desc = could not find container \"cdf3d7e9fed648e6864c02bc58d72df761c899657a7bc9eb627789bad4c6e52e\": container with ID starting with cdf3d7e9fed648e6864c02bc58d72df761c899657a7bc9eb627789bad4c6e52e not found: ID does not exist" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.694432 5024 scope.go:117] "RemoveContainer" containerID="62513c8cbba0d6e60d4e05984c07cd1e32e9b0ed0df234292f364be4c1076f72" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.711761 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3-client-ca\") pod \"controller-manager-5c8c74c7df-cp94v\" (UID: \"b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3\") " pod="openshift-controller-manager/controller-manager-5c8c74c7df-cp94v" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.711853 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8rl9\" (UniqueName: \"kubernetes.io/projected/b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3-kube-api-access-f8rl9\") pod \"controller-manager-5c8c74c7df-cp94v\" (UID: \"b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3\") " pod="openshift-controller-manager/controller-manager-5c8c74c7df-cp94v" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.711973 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whmkb\" (UniqueName: \"kubernetes.io/projected/6a407dc8-adcc-43db-92bc-aaf0520b8597-kube-api-access-whmkb\") pod \"route-controller-manager-dc94d4d99-mpc2p\" (UID: \"6a407dc8-adcc-43db-92bc-aaf0520b8597\") " pod="openshift-route-controller-manager/route-controller-manager-dc94d4d99-mpc2p" Oct 07 12:40:18 crc 
kubenswrapper[5024]: I1007 12:40:18.712025 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3-proxy-ca-bundles\") pod \"controller-manager-5c8c74c7df-cp94v\" (UID: \"b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3\") " pod="openshift-controller-manager/controller-manager-5c8c74c7df-cp94v" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.712512 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a407dc8-adcc-43db-92bc-aaf0520b8597-config\") pod \"route-controller-manager-dc94d4d99-mpc2p\" (UID: \"6a407dc8-adcc-43db-92bc-aaf0520b8597\") " pod="openshift-route-controller-manager/route-controller-manager-dc94d4d99-mpc2p" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.712742 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a407dc8-adcc-43db-92bc-aaf0520b8597-client-ca\") pod \"route-controller-manager-dc94d4d99-mpc2p\" (UID: \"6a407dc8-adcc-43db-92bc-aaf0520b8597\") " pod="openshift-route-controller-manager/route-controller-manager-dc94d4d99-mpc2p" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.712959 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a407dc8-adcc-43db-92bc-aaf0520b8597-serving-cert\") pod \"route-controller-manager-dc94d4d99-mpc2p\" (UID: \"6a407dc8-adcc-43db-92bc-aaf0520b8597\") " pod="openshift-route-controller-manager/route-controller-manager-dc94d4d99-mpc2p" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.713091 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3-config\") pod 
\"controller-manager-5c8c74c7df-cp94v\" (UID: \"b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3\") " pod="openshift-controller-manager/controller-manager-5c8c74c7df-cp94v" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.713489 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3-serving-cert\") pod \"controller-manager-5c8c74c7df-cp94v\" (UID: \"b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3\") " pod="openshift-controller-manager/controller-manager-5c8c74c7df-cp94v" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.713898 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a407dc8-adcc-43db-92bc-aaf0520b8597-config\") pod \"route-controller-manager-dc94d4d99-mpc2p\" (UID: \"6a407dc8-adcc-43db-92bc-aaf0520b8597\") " pod="openshift-route-controller-manager/route-controller-manager-dc94d4d99-mpc2p" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.714700 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a407dc8-adcc-43db-92bc-aaf0520b8597-client-ca\") pod \"route-controller-manager-dc94d4d99-mpc2p\" (UID: \"6a407dc8-adcc-43db-92bc-aaf0520b8597\") " pod="openshift-route-controller-manager/route-controller-manager-dc94d4d99-mpc2p" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.719899 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a407dc8-adcc-43db-92bc-aaf0520b8597-serving-cert\") pod \"route-controller-manager-dc94d4d99-mpc2p\" (UID: \"6a407dc8-adcc-43db-92bc-aaf0520b8597\") " pod="openshift-route-controller-manager/route-controller-manager-dc94d4d99-mpc2p" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.724034 5024 scope.go:117] "RemoveContainer" 
containerID="62513c8cbba0d6e60d4e05984c07cd1e32e9b0ed0df234292f364be4c1076f72" Oct 07 12:40:18 crc kubenswrapper[5024]: E1007 12:40:18.724892 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62513c8cbba0d6e60d4e05984c07cd1e32e9b0ed0df234292f364be4c1076f72\": container with ID starting with 62513c8cbba0d6e60d4e05984c07cd1e32e9b0ed0df234292f364be4c1076f72 not found: ID does not exist" containerID="62513c8cbba0d6e60d4e05984c07cd1e32e9b0ed0df234292f364be4c1076f72" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.724986 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62513c8cbba0d6e60d4e05984c07cd1e32e9b0ed0df234292f364be4c1076f72"} err="failed to get container status \"62513c8cbba0d6e60d4e05984c07cd1e32e9b0ed0df234292f364be4c1076f72\": rpc error: code = NotFound desc = could not find container \"62513c8cbba0d6e60d4e05984c07cd1e32e9b0ed0df234292f364be4c1076f72\": container with ID starting with 62513c8cbba0d6e60d4e05984c07cd1e32e9b0ed0df234292f364be4c1076f72 not found: ID does not exist" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.742286 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whmkb\" (UniqueName: \"kubernetes.io/projected/6a407dc8-adcc-43db-92bc-aaf0520b8597-kube-api-access-whmkb\") pod \"route-controller-manager-dc94d4d99-mpc2p\" (UID: \"6a407dc8-adcc-43db-92bc-aaf0520b8597\") " pod="openshift-route-controller-manager/route-controller-manager-dc94d4d99-mpc2p" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.766217 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a74a950f-a98b-45c9-bdd0-0cdda261396f" path="/var/lib/kubelet/pods/a74a950f-a98b-45c9-bdd0-0cdda261396f/volumes" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.767063 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f324f3c7-44fa-473c-8b60-ea30be3b7045" path="/var/lib/kubelet/pods/f324f3c7-44fa-473c-8b60-ea30be3b7045/volumes" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.814265 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3-serving-cert\") pod \"controller-manager-5c8c74c7df-cp94v\" (UID: \"b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3\") " pod="openshift-controller-manager/controller-manager-5c8c74c7df-cp94v" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.814319 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8rl9\" (UniqueName: \"kubernetes.io/projected/b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3-kube-api-access-f8rl9\") pod \"controller-manager-5c8c74c7df-cp94v\" (UID: \"b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3\") " pod="openshift-controller-manager/controller-manager-5c8c74c7df-cp94v" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.814338 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3-client-ca\") pod \"controller-manager-5c8c74c7df-cp94v\" (UID: \"b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3\") " pod="openshift-controller-manager/controller-manager-5c8c74c7df-cp94v" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.814363 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3-proxy-ca-bundles\") pod \"controller-manager-5c8c74c7df-cp94v\" (UID: \"b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3\") " pod="openshift-controller-manager/controller-manager-5c8c74c7df-cp94v" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.814405 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3-config\") pod \"controller-manager-5c8c74c7df-cp94v\" (UID: \"b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3\") " pod="openshift-controller-manager/controller-manager-5c8c74c7df-cp94v" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.818518 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3-config\") pod \"controller-manager-5c8c74c7df-cp94v\" (UID: \"b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3\") " pod="openshift-controller-manager/controller-manager-5c8c74c7df-cp94v" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.818846 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3-proxy-ca-bundles\") pod \"controller-manager-5c8c74c7df-cp94v\" (UID: \"b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3\") " pod="openshift-controller-manager/controller-manager-5c8c74c7df-cp94v" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.819236 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3-client-ca\") pod \"controller-manager-5c8c74c7df-cp94v\" (UID: \"b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3\") " pod="openshift-controller-manager/controller-manager-5c8c74c7df-cp94v" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.819864 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3-serving-cert\") pod \"controller-manager-5c8c74c7df-cp94v\" (UID: \"b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3\") " pod="openshift-controller-manager/controller-manager-5c8c74c7df-cp94v" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.838946 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-f8rl9\" (UniqueName: \"kubernetes.io/projected/b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3-kube-api-access-f8rl9\") pod \"controller-manager-5c8c74c7df-cp94v\" (UID: \"b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3\") " pod="openshift-controller-manager/controller-manager-5c8c74c7df-cp94v" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.848847 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dc94d4d99-mpc2p" Oct 07 12:40:18 crc kubenswrapper[5024]: I1007 12:40:18.857853 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c8c74c7df-cp94v" Oct 07 12:40:19 crc kubenswrapper[5024]: I1007 12:40:19.056768 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dc94d4d99-mpc2p"] Oct 07 12:40:19 crc kubenswrapper[5024]: I1007 12:40:19.115609 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c8c74c7df-cp94v"] Oct 07 12:40:19 crc kubenswrapper[5024]: I1007 12:40:19.339401 5024 generic.go:334] "Generic (PLEG): container finished" podID="13571b28-d44b-4e20-8e38-b6577b12fddf" containerID="9ef08b7154d449c7405f70aabf1a96399ff1287dddbebac8f4d91d91e66ebaed" exitCode=0 Oct 07 12:40:19 crc kubenswrapper[5024]: I1007 12:40:19.339506 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr" event={"ID":"13571b28-d44b-4e20-8e38-b6577b12fddf","Type":"ContainerDied","Data":"9ef08b7154d449c7405f70aabf1a96399ff1287dddbebac8f4d91d91e66ebaed"} Oct 07 12:40:19 crc kubenswrapper[5024]: I1007 12:40:19.340914 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c8c74c7df-cp94v" 
event={"ID":"b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3","Type":"ContainerStarted","Data":"ada6df8c082e127132d2908c06f8a77711f3a0a271ad2090655aee996c9ae975"} Oct 07 12:40:19 crc kubenswrapper[5024]: I1007 12:40:19.340942 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c8c74c7df-cp94v" event={"ID":"b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3","Type":"ContainerStarted","Data":"5bae9d73375bad1e2a92dad0ef418535164925bf89e73fad687efd523c4d6b91"} Oct 07 12:40:19 crc kubenswrapper[5024]: I1007 12:40:19.341482 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5c8c74c7df-cp94v" Oct 07 12:40:19 crc kubenswrapper[5024]: I1007 12:40:19.342595 5024 patch_prober.go:28] interesting pod/controller-manager-5c8c74c7df-cp94v container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Oct 07 12:40:19 crc kubenswrapper[5024]: I1007 12:40:19.342645 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5c8c74c7df-cp94v" podUID="b2a0b52a-aa12-4b58-ba68-b45e5f6c65c3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Oct 07 12:40:19 crc kubenswrapper[5024]: I1007 12:40:19.344922 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dc94d4d99-mpc2p" event={"ID":"6a407dc8-adcc-43db-92bc-aaf0520b8597","Type":"ContainerStarted","Data":"ea77397368b7faef8b73227f2061706ff1d3e509eadaa510f886b92d6d241e02"} Oct 07 12:40:19 crc kubenswrapper[5024]: I1007 12:40:19.344967 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-dc94d4d99-mpc2p" event={"ID":"6a407dc8-adcc-43db-92bc-aaf0520b8597","Type":"ContainerStarted","Data":"c76d582b8b67b642391cdd9c6329c9d82db55b93b46e02767d99e379f930cc13"} Oct 07 12:40:19 crc kubenswrapper[5024]: I1007 12:40:19.344988 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-dc94d4d99-mpc2p" Oct 07 12:40:19 crc kubenswrapper[5024]: I1007 12:40:19.378464 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c8c74c7df-cp94v" podStartSLOduration=2.378444416 podStartE2EDuration="2.378444416s" podCreationTimestamp="2025-10-07 12:40:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:40:19.373888222 +0000 UTC m=+757.449675060" watchObservedRunningTime="2025-10-07 12:40:19.378444416 +0000 UTC m=+757.454231254" Oct 07 12:40:19 crc kubenswrapper[5024]: I1007 12:40:19.398043 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-dc94d4d99-mpc2p" podStartSLOduration=2.398000855 podStartE2EDuration="2.398000855s" podCreationTimestamp="2025-10-07 12:40:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:40:19.392835142 +0000 UTC m=+757.468622000" watchObservedRunningTime="2025-10-07 12:40:19.398000855 +0000 UTC m=+757.473787693" Oct 07 12:40:19 crc kubenswrapper[5024]: I1007 12:40:19.571686 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-dc94d4d99-mpc2p" Oct 07 12:40:20 crc kubenswrapper[5024]: I1007 12:40:20.355667 5024 generic.go:334] "Generic (PLEG): container finished" 
podID="13571b28-d44b-4e20-8e38-b6577b12fddf" containerID="180011a8b1d15ef96aeeb7803a50bf7a6d8f5a0cb4133984c7bf280763b1b486" exitCode=0 Oct 07 12:40:20 crc kubenswrapper[5024]: I1007 12:40:20.356112 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr" event={"ID":"13571b28-d44b-4e20-8e38-b6577b12fddf","Type":"ContainerDied","Data":"180011a8b1d15ef96aeeb7803a50bf7a6d8f5a0cb4133984c7bf280763b1b486"} Oct 07 12:40:20 crc kubenswrapper[5024]: I1007 12:40:20.362875 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c8c74c7df-cp94v" Oct 07 12:40:21 crc kubenswrapper[5024]: I1007 12:40:21.636327 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr" Oct 07 12:40:21 crc kubenswrapper[5024]: I1007 12:40:21.750262 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/13571b28-d44b-4e20-8e38-b6577b12fddf-bundle\") pod \"13571b28-d44b-4e20-8e38-b6577b12fddf\" (UID: \"13571b28-d44b-4e20-8e38-b6577b12fddf\") " Oct 07 12:40:21 crc kubenswrapper[5024]: I1007 12:40:21.750397 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13571b28-d44b-4e20-8e38-b6577b12fddf-util\") pod \"13571b28-d44b-4e20-8e38-b6577b12fddf\" (UID: \"13571b28-d44b-4e20-8e38-b6577b12fddf\") " Oct 07 12:40:21 crc kubenswrapper[5024]: I1007 12:40:21.750429 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8k7qh\" (UniqueName: \"kubernetes.io/projected/13571b28-d44b-4e20-8e38-b6577b12fddf-kube-api-access-8k7qh\") pod \"13571b28-d44b-4e20-8e38-b6577b12fddf\" (UID: \"13571b28-d44b-4e20-8e38-b6577b12fddf\") " Oct 07 12:40:21 crc 
kubenswrapper[5024]: I1007 12:40:21.752216 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13571b28-d44b-4e20-8e38-b6577b12fddf-bundle" (OuterVolumeSpecName: "bundle") pod "13571b28-d44b-4e20-8e38-b6577b12fddf" (UID: "13571b28-d44b-4e20-8e38-b6577b12fddf"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:40:21 crc kubenswrapper[5024]: I1007 12:40:21.757050 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13571b28-d44b-4e20-8e38-b6577b12fddf-kube-api-access-8k7qh" (OuterVolumeSpecName: "kube-api-access-8k7qh") pod "13571b28-d44b-4e20-8e38-b6577b12fddf" (UID: "13571b28-d44b-4e20-8e38-b6577b12fddf"). InnerVolumeSpecName "kube-api-access-8k7qh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:40:21 crc kubenswrapper[5024]: I1007 12:40:21.760047 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13571b28-d44b-4e20-8e38-b6577b12fddf-util" (OuterVolumeSpecName: "util") pod "13571b28-d44b-4e20-8e38-b6577b12fddf" (UID: "13571b28-d44b-4e20-8e38-b6577b12fddf"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:40:21 crc kubenswrapper[5024]: I1007 12:40:21.862735 5024 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/13571b28-d44b-4e20-8e38-b6577b12fddf-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:40:21 crc kubenswrapper[5024]: I1007 12:40:21.863378 5024 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13571b28-d44b-4e20-8e38-b6577b12fddf-util\") on node \"crc\" DevicePath \"\"" Oct 07 12:40:21 crc kubenswrapper[5024]: I1007 12:40:21.863406 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8k7qh\" (UniqueName: \"kubernetes.io/projected/13571b28-d44b-4e20-8e38-b6577b12fddf-kube-api-access-8k7qh\") on node \"crc\" DevicePath \"\"" Oct 07 12:40:22 crc kubenswrapper[5024]: I1007 12:40:22.203278 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2bsdt"] Oct 07 12:40:22 crc kubenswrapper[5024]: E1007 12:40:22.203488 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13571b28-d44b-4e20-8e38-b6577b12fddf" containerName="pull" Oct 07 12:40:22 crc kubenswrapper[5024]: I1007 12:40:22.203503 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="13571b28-d44b-4e20-8e38-b6577b12fddf" containerName="pull" Oct 07 12:40:22 crc kubenswrapper[5024]: E1007 12:40:22.203523 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13571b28-d44b-4e20-8e38-b6577b12fddf" containerName="util" Oct 07 12:40:22 crc kubenswrapper[5024]: I1007 12:40:22.203530 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="13571b28-d44b-4e20-8e38-b6577b12fddf" containerName="util" Oct 07 12:40:22 crc kubenswrapper[5024]: E1007 12:40:22.203544 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13571b28-d44b-4e20-8e38-b6577b12fddf" containerName="extract" Oct 07 12:40:22 crc kubenswrapper[5024]: I1007 
12:40:22.203551 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="13571b28-d44b-4e20-8e38-b6577b12fddf" containerName="extract" Oct 07 12:40:22 crc kubenswrapper[5024]: I1007 12:40:22.203670 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="13571b28-d44b-4e20-8e38-b6577b12fddf" containerName="extract" Oct 07 12:40:22 crc kubenswrapper[5024]: I1007 12:40:22.204694 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2bsdt" Oct 07 12:40:22 crc kubenswrapper[5024]: I1007 12:40:22.213398 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2bsdt"] Oct 07 12:40:22 crc kubenswrapper[5024]: I1007 12:40:22.370822 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr" event={"ID":"13571b28-d44b-4e20-8e38-b6577b12fddf","Type":"ContainerDied","Data":"d932e4dd75c0809eb010d54ba9974705492c943e279f0785754cd2dd43ef853a"} Oct 07 12:40:22 crc kubenswrapper[5024]: I1007 12:40:22.370857 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr" Oct 07 12:40:22 crc kubenswrapper[5024]: I1007 12:40:22.370874 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d932e4dd75c0809eb010d54ba9974705492c943e279f0785754cd2dd43ef853a" Oct 07 12:40:22 crc kubenswrapper[5024]: I1007 12:40:22.370908 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwtqg\" (UniqueName: \"kubernetes.io/projected/b28dc7c2-402e-4f40-a836-86485f2bcb36-kube-api-access-gwtqg\") pod \"redhat-operators-2bsdt\" (UID: \"b28dc7c2-402e-4f40-a836-86485f2bcb36\") " pod="openshift-marketplace/redhat-operators-2bsdt" Oct 07 12:40:22 crc kubenswrapper[5024]: I1007 12:40:22.371283 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b28dc7c2-402e-4f40-a836-86485f2bcb36-utilities\") pod \"redhat-operators-2bsdt\" (UID: \"b28dc7c2-402e-4f40-a836-86485f2bcb36\") " pod="openshift-marketplace/redhat-operators-2bsdt" Oct 07 12:40:22 crc kubenswrapper[5024]: I1007 12:40:22.371339 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b28dc7c2-402e-4f40-a836-86485f2bcb36-catalog-content\") pod \"redhat-operators-2bsdt\" (UID: \"b28dc7c2-402e-4f40-a836-86485f2bcb36\") " pod="openshift-marketplace/redhat-operators-2bsdt" Oct 07 12:40:22 crc kubenswrapper[5024]: I1007 12:40:22.473054 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b28dc7c2-402e-4f40-a836-86485f2bcb36-utilities\") pod \"redhat-operators-2bsdt\" (UID: \"b28dc7c2-402e-4f40-a836-86485f2bcb36\") " pod="openshift-marketplace/redhat-operators-2bsdt" Oct 07 12:40:22 crc kubenswrapper[5024]: I1007 
12:40:22.473362 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b28dc7c2-402e-4f40-a836-86485f2bcb36-catalog-content\") pod \"redhat-operators-2bsdt\" (UID: \"b28dc7c2-402e-4f40-a836-86485f2bcb36\") " pod="openshift-marketplace/redhat-operators-2bsdt" Oct 07 12:40:22 crc kubenswrapper[5024]: I1007 12:40:22.473474 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwtqg\" (UniqueName: \"kubernetes.io/projected/b28dc7c2-402e-4f40-a836-86485f2bcb36-kube-api-access-gwtqg\") pod \"redhat-operators-2bsdt\" (UID: \"b28dc7c2-402e-4f40-a836-86485f2bcb36\") " pod="openshift-marketplace/redhat-operators-2bsdt" Oct 07 12:40:22 crc kubenswrapper[5024]: I1007 12:40:22.473842 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b28dc7c2-402e-4f40-a836-86485f2bcb36-catalog-content\") pod \"redhat-operators-2bsdt\" (UID: \"b28dc7c2-402e-4f40-a836-86485f2bcb36\") " pod="openshift-marketplace/redhat-operators-2bsdt" Oct 07 12:40:22 crc kubenswrapper[5024]: I1007 12:40:22.473857 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b28dc7c2-402e-4f40-a836-86485f2bcb36-utilities\") pod \"redhat-operators-2bsdt\" (UID: \"b28dc7c2-402e-4f40-a836-86485f2bcb36\") " pod="openshift-marketplace/redhat-operators-2bsdt" Oct 07 12:40:22 crc kubenswrapper[5024]: I1007 12:40:22.493373 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwtqg\" (UniqueName: \"kubernetes.io/projected/b28dc7c2-402e-4f40-a836-86485f2bcb36-kube-api-access-gwtqg\") pod \"redhat-operators-2bsdt\" (UID: \"b28dc7c2-402e-4f40-a836-86485f2bcb36\") " pod="openshift-marketplace/redhat-operators-2bsdt" Oct 07 12:40:22 crc kubenswrapper[5024]: I1007 12:40:22.522050 5024 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2bsdt" Oct 07 12:40:22 crc kubenswrapper[5024]: I1007 12:40:22.979891 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2bsdt"] Oct 07 12:40:23 crc kubenswrapper[5024]: I1007 12:40:23.377649 5024 generic.go:334] "Generic (PLEG): container finished" podID="b28dc7c2-402e-4f40-a836-86485f2bcb36" containerID="58c68d766249e965901730b8e36ca2b37a0bd8992d21b775fe9b4181e68dd190" exitCode=0 Oct 07 12:40:23 crc kubenswrapper[5024]: I1007 12:40:23.377749 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2bsdt" event={"ID":"b28dc7c2-402e-4f40-a836-86485f2bcb36","Type":"ContainerDied","Data":"58c68d766249e965901730b8e36ca2b37a0bd8992d21b775fe9b4181e68dd190"} Oct 07 12:40:23 crc kubenswrapper[5024]: I1007 12:40:23.377983 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2bsdt" event={"ID":"b28dc7c2-402e-4f40-a836-86485f2bcb36","Type":"ContainerStarted","Data":"d0ba8fcb414516b588481e627408d3b87f3416e553d1c2fd820681bab0c5bef0"} Oct 07 12:40:23 crc kubenswrapper[5024]: I1007 12:40:23.723843 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-4f6sz"] Oct 07 12:40:23 crc kubenswrapper[5024]: I1007 12:40:23.724411 5024 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 07 12:40:23 crc kubenswrapper[5024]: I1007 12:40:23.725236 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-4f6sz" Oct 07 12:40:23 crc kubenswrapper[5024]: I1007 12:40:23.728174 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Oct 07 12:40:23 crc kubenswrapper[5024]: I1007 12:40:23.728557 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Oct 07 12:40:23 crc kubenswrapper[5024]: I1007 12:40:23.730599 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-kt7nn" Oct 07 12:40:23 crc kubenswrapper[5024]: I1007 12:40:23.743442 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-4f6sz"] Oct 07 12:40:23 crc kubenswrapper[5024]: I1007 12:40:23.905959 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s42kn\" (UniqueName: \"kubernetes.io/projected/bbb0788e-db6f-48ac-aaab-b61da783d4a1-kube-api-access-s42kn\") pod \"nmstate-operator-858ddd8f98-4f6sz\" (UID: \"bbb0788e-db6f-48ac-aaab-b61da783d4a1\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-4f6sz" Oct 07 12:40:24 crc kubenswrapper[5024]: I1007 12:40:24.008673 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s42kn\" (UniqueName: \"kubernetes.io/projected/bbb0788e-db6f-48ac-aaab-b61da783d4a1-kube-api-access-s42kn\") pod \"nmstate-operator-858ddd8f98-4f6sz\" (UID: \"bbb0788e-db6f-48ac-aaab-b61da783d4a1\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-4f6sz" Oct 07 12:40:24 crc kubenswrapper[5024]: I1007 12:40:24.028006 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s42kn\" (UniqueName: \"kubernetes.io/projected/bbb0788e-db6f-48ac-aaab-b61da783d4a1-kube-api-access-s42kn\") pod \"nmstate-operator-858ddd8f98-4f6sz\" (UID: 
\"bbb0788e-db6f-48ac-aaab-b61da783d4a1\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-4f6sz" Oct 07 12:40:24 crc kubenswrapper[5024]: I1007 12:40:24.041777 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-4f6sz" Oct 07 12:40:24 crc kubenswrapper[5024]: I1007 12:40:24.449694 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-4f6sz"] Oct 07 12:40:25 crc kubenswrapper[5024]: I1007 12:40:25.394852 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-4f6sz" event={"ID":"bbb0788e-db6f-48ac-aaab-b61da783d4a1","Type":"ContainerStarted","Data":"e11d754643519af2238a2a831d435be71098b0ea5b9339d02f2a81d3fd084741"} Oct 07 12:40:28 crc kubenswrapper[5024]: I1007 12:40:28.422641 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-4f6sz" event={"ID":"bbb0788e-db6f-48ac-aaab-b61da783d4a1","Type":"ContainerStarted","Data":"3c6ed18c2521c313fdb9ea101723e2d6cd68aeeea0f92d5609d8cae316ef24d1"} Oct 07 12:40:28 crc kubenswrapper[5024]: I1007 12:40:28.443229 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-858ddd8f98-4f6sz" podStartSLOduration=2.539233173 podStartE2EDuration="5.443210627s" podCreationTimestamp="2025-10-07 12:40:23 +0000 UTC" firstStartedPulling="2025-10-07 12:40:24.457451345 +0000 UTC m=+762.533238173" lastFinishedPulling="2025-10-07 12:40:27.361428789 +0000 UTC m=+765.437215627" observedRunningTime="2025-10-07 12:40:28.441263629 +0000 UTC m=+766.517050467" watchObservedRunningTime="2025-10-07 12:40:28.443210627 +0000 UTC m=+766.518997465" Oct 07 12:40:32 crc kubenswrapper[5024]: I1007 12:40:32.448223 5024 generic.go:334] "Generic (PLEG): container finished" podID="b28dc7c2-402e-4f40-a836-86485f2bcb36" 
containerID="57e06a1ba53131d5236e761075d10a4f63a0e3666462b8825a2e79dcd643edba" exitCode=0 Oct 07 12:40:32 crc kubenswrapper[5024]: I1007 12:40:32.448263 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2bsdt" event={"ID":"b28dc7c2-402e-4f40-a836-86485f2bcb36","Type":"ContainerDied","Data":"57e06a1ba53131d5236e761075d10a4f63a0e3666462b8825a2e79dcd643edba"} Oct 07 12:40:33 crc kubenswrapper[5024]: I1007 12:40:33.463817 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2bsdt" event={"ID":"b28dc7c2-402e-4f40-a836-86485f2bcb36","Type":"ContainerStarted","Data":"f8788b6562191ed3a0e7b036724b03f7c7471f3564704e057e59a95fdc13d5ec"} Oct 07 12:40:33 crc kubenswrapper[5024]: I1007 12:40:33.478292 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2bsdt" podStartSLOduration=1.88729711 podStartE2EDuration="11.478272105s" podCreationTimestamp="2025-10-07 12:40:22 +0000 UTC" firstStartedPulling="2025-10-07 12:40:23.379354686 +0000 UTC m=+761.455141524" lastFinishedPulling="2025-10-07 12:40:32.970329681 +0000 UTC m=+771.046116519" observedRunningTime="2025-10-07 12:40:33.478087539 +0000 UTC m=+771.553874397" watchObservedRunningTime="2025-10-07 12:40:33.478272105 +0000 UTC m=+771.554058943" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.612859 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-z7l7l"] Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.613967 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-z7l7l" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.616954 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-27ptq" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.627452 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-z7l7l"] Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.670298 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-x47wd"] Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.696903 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-x47wd" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.749493 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkwzn\" (UniqueName: \"kubernetes.io/projected/1c8be1f9-a445-4dfd-9ad0-9c8b222e139e-kube-api-access-xkwzn\") pod \"nmstate-handler-x47wd\" (UID: \"1c8be1f9-a445-4dfd-9ad0-9c8b222e139e\") " pod="openshift-nmstate/nmstate-handler-x47wd" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.749560 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1c8be1f9-a445-4dfd-9ad0-9c8b222e139e-nmstate-lock\") pod \"nmstate-handler-x47wd\" (UID: \"1c8be1f9-a445-4dfd-9ad0-9c8b222e139e\") " pod="openshift-nmstate/nmstate-handler-x47wd" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.749632 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1c8be1f9-a445-4dfd-9ad0-9c8b222e139e-ovs-socket\") pod \"nmstate-handler-x47wd\" (UID: \"1c8be1f9-a445-4dfd-9ad0-9c8b222e139e\") " pod="openshift-nmstate/nmstate-handler-x47wd" Oct 
07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.749731 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1c8be1f9-a445-4dfd-9ad0-9c8b222e139e-dbus-socket\") pod \"nmstate-handler-x47wd\" (UID: \"1c8be1f9-a445-4dfd-9ad0-9c8b222e139e\") " pod="openshift-nmstate/nmstate-handler-x47wd" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.750063 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-szjk6"] Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.750754 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-szjk6" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.753439 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.780832 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-szjk6"] Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.851330 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1c8be1f9-a445-4dfd-9ad0-9c8b222e139e-dbus-socket\") pod \"nmstate-handler-x47wd\" (UID: \"1c8be1f9-a445-4dfd-9ad0-9c8b222e139e\") " pod="openshift-nmstate/nmstate-handler-x47wd" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.851572 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkwzn\" (UniqueName: \"kubernetes.io/projected/1c8be1f9-a445-4dfd-9ad0-9c8b222e139e-kube-api-access-xkwzn\") pod \"nmstate-handler-x47wd\" (UID: \"1c8be1f9-a445-4dfd-9ad0-9c8b222e139e\") " pod="openshift-nmstate/nmstate-handler-x47wd" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.851666 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b253e4aa-8785-427b-bdf2-d2efa0af3671-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-szjk6\" (UID: \"b253e4aa-8785-427b-bdf2-d2efa0af3671\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-szjk6" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.851735 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1c8be1f9-a445-4dfd-9ad0-9c8b222e139e-dbus-socket\") pod \"nmstate-handler-x47wd\" (UID: \"1c8be1f9-a445-4dfd-9ad0-9c8b222e139e\") " pod="openshift-nmstate/nmstate-handler-x47wd" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.851916 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1c8be1f9-a445-4dfd-9ad0-9c8b222e139e-nmstate-lock\") pod \"nmstate-handler-x47wd\" (UID: \"1c8be1f9-a445-4dfd-9ad0-9c8b222e139e\") " pod="openshift-nmstate/nmstate-handler-x47wd" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.851958 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j4t8\" (UniqueName: \"kubernetes.io/projected/b466eea2-b593-4881-9b9e-af8b75bdead1-kube-api-access-2j4t8\") pod \"nmstate-metrics-fdff9cb8d-z7l7l\" (UID: \"b466eea2-b593-4881-9b9e-af8b75bdead1\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-z7l7l" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.852002 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1c8be1f9-a445-4dfd-9ad0-9c8b222e139e-ovs-socket\") pod \"nmstate-handler-x47wd\" (UID: \"1c8be1f9-a445-4dfd-9ad0-9c8b222e139e\") " pod="openshift-nmstate/nmstate-handler-x47wd" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.852048 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt5vn\" (UniqueName: \"kubernetes.io/projected/b253e4aa-8785-427b-bdf2-d2efa0af3671-kube-api-access-dt5vn\") pod \"nmstate-webhook-6cdbc54649-szjk6\" (UID: \"b253e4aa-8785-427b-bdf2-d2efa0af3671\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-szjk6" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.852330 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1c8be1f9-a445-4dfd-9ad0-9c8b222e139e-nmstate-lock\") pod \"nmstate-handler-x47wd\" (UID: \"1c8be1f9-a445-4dfd-9ad0-9c8b222e139e\") " pod="openshift-nmstate/nmstate-handler-x47wd" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.852727 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1c8be1f9-a445-4dfd-9ad0-9c8b222e139e-ovs-socket\") pod \"nmstate-handler-x47wd\" (UID: \"1c8be1f9-a445-4dfd-9ad0-9c8b222e139e\") " pod="openshift-nmstate/nmstate-handler-x47wd" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.864479 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-75bxq"] Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.865120 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-75bxq" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.867213 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.867920 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.870120 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-6nkgn" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.874452 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkwzn\" (UniqueName: \"kubernetes.io/projected/1c8be1f9-a445-4dfd-9ad0-9c8b222e139e-kube-api-access-xkwzn\") pod \"nmstate-handler-x47wd\" (UID: \"1c8be1f9-a445-4dfd-9ad0-9c8b222e139e\") " pod="openshift-nmstate/nmstate-handler-x47wd" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.898564 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-75bxq"] Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.953247 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j4t8\" (UniqueName: \"kubernetes.io/projected/b466eea2-b593-4881-9b9e-af8b75bdead1-kube-api-access-2j4t8\") pod \"nmstate-metrics-fdff9cb8d-z7l7l\" (UID: \"b466eea2-b593-4881-9b9e-af8b75bdead1\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-z7l7l" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.953345 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5vn\" (UniqueName: \"kubernetes.io/projected/b253e4aa-8785-427b-bdf2-d2efa0af3671-kube-api-access-dt5vn\") pod \"nmstate-webhook-6cdbc54649-szjk6\" (UID: \"b253e4aa-8785-427b-bdf2-d2efa0af3671\") " 
pod="openshift-nmstate/nmstate-webhook-6cdbc54649-szjk6" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.953385 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ff62656-a8c7-4f5e-a6ea-6e3324d284ef-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-75bxq\" (UID: \"5ff62656-a8c7-4f5e-a6ea-6e3324d284ef\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-75bxq" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.953409 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzvqd\" (UniqueName: \"kubernetes.io/projected/5ff62656-a8c7-4f5e-a6ea-6e3324d284ef-kube-api-access-qzvqd\") pod \"nmstate-console-plugin-6b874cbd85-75bxq\" (UID: \"5ff62656-a8c7-4f5e-a6ea-6e3324d284ef\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-75bxq" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.953546 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5ff62656-a8c7-4f5e-a6ea-6e3324d284ef-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-75bxq\" (UID: \"5ff62656-a8c7-4f5e-a6ea-6e3324d284ef\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-75bxq" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.953573 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b253e4aa-8785-427b-bdf2-d2efa0af3671-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-szjk6\" (UID: \"b253e4aa-8785-427b-bdf2-d2efa0af3671\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-szjk6" Oct 07 12:40:34 crc kubenswrapper[5024]: E1007 12:40:34.953694 5024 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Oct 07 12:40:34 crc 
kubenswrapper[5024]: E1007 12:40:34.953748 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b253e4aa-8785-427b-bdf2-d2efa0af3671-tls-key-pair podName:b253e4aa-8785-427b-bdf2-d2efa0af3671 nodeName:}" failed. No retries permitted until 2025-10-07 12:40:35.453724555 +0000 UTC m=+773.529511393 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/b253e4aa-8785-427b-bdf2-d2efa0af3671-tls-key-pair") pod "nmstate-webhook-6cdbc54649-szjk6" (UID: "b253e4aa-8785-427b-bdf2-d2efa0af3671") : secret "openshift-nmstate-webhook" not found Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.970806 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j4t8\" (UniqueName: \"kubernetes.io/projected/b466eea2-b593-4881-9b9e-af8b75bdead1-kube-api-access-2j4t8\") pod \"nmstate-metrics-fdff9cb8d-z7l7l\" (UID: \"b466eea2-b593-4881-9b9e-af8b75bdead1\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-z7l7l" Oct 07 12:40:34 crc kubenswrapper[5024]: I1007 12:40:34.979945 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt5vn\" (UniqueName: \"kubernetes.io/projected/b253e4aa-8785-427b-bdf2-d2efa0af3671-kube-api-access-dt5vn\") pod \"nmstate-webhook-6cdbc54649-szjk6\" (UID: \"b253e4aa-8785-427b-bdf2-d2efa0af3671\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-szjk6" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.054475 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ff62656-a8c7-4f5e-a6ea-6e3324d284ef-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-75bxq\" (UID: \"5ff62656-a8c7-4f5e-a6ea-6e3324d284ef\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-75bxq" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.054542 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qzvqd\" (UniqueName: \"kubernetes.io/projected/5ff62656-a8c7-4f5e-a6ea-6e3324d284ef-kube-api-access-qzvqd\") pod \"nmstate-console-plugin-6b874cbd85-75bxq\" (UID: \"5ff62656-a8c7-4f5e-a6ea-6e3324d284ef\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-75bxq" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.054568 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5ff62656-a8c7-4f5e-a6ea-6e3324d284ef-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-75bxq\" (UID: \"5ff62656-a8c7-4f5e-a6ea-6e3324d284ef\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-75bxq" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.055685 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5ff62656-a8c7-4f5e-a6ea-6e3324d284ef-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-75bxq\" (UID: \"5ff62656-a8c7-4f5e-a6ea-6e3324d284ef\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-75bxq" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.059770 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ff62656-a8c7-4f5e-a6ea-6e3324d284ef-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-75bxq\" (UID: \"5ff62656-a8c7-4f5e-a6ea-6e3324d284ef\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-75bxq" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.064112 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5d69bfccc8-2zqc7"] Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.064943 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d69bfccc8-2zqc7" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.069436 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-x47wd" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.072127 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzvqd\" (UniqueName: \"kubernetes.io/projected/5ff62656-a8c7-4f5e-a6ea-6e3324d284ef-kube-api-access-qzvqd\") pod \"nmstate-console-plugin-6b874cbd85-75bxq\" (UID: \"5ff62656-a8c7-4f5e-a6ea-6e3324d284ef\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-75bxq" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.123654 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d69bfccc8-2zqc7"] Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.156466 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6d6827fb-6862-4c14-bafc-ee57512b2681-console-config\") pod \"console-5d69bfccc8-2zqc7\" (UID: \"6d6827fb-6862-4c14-bafc-ee57512b2681\") " pod="openshift-console/console-5d69bfccc8-2zqc7" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.156516 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d6827fb-6862-4c14-bafc-ee57512b2681-trusted-ca-bundle\") pod \"console-5d69bfccc8-2zqc7\" (UID: \"6d6827fb-6862-4c14-bafc-ee57512b2681\") " pod="openshift-console/console-5d69bfccc8-2zqc7" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.156556 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6d6827fb-6862-4c14-bafc-ee57512b2681-console-oauth-config\") pod \"console-5d69bfccc8-2zqc7\" (UID: 
\"6d6827fb-6862-4c14-bafc-ee57512b2681\") " pod="openshift-console/console-5d69bfccc8-2zqc7" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.156594 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6d6827fb-6862-4c14-bafc-ee57512b2681-service-ca\") pod \"console-5d69bfccc8-2zqc7\" (UID: \"6d6827fb-6862-4c14-bafc-ee57512b2681\") " pod="openshift-console/console-5d69bfccc8-2zqc7" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.156652 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d6827fb-6862-4c14-bafc-ee57512b2681-console-serving-cert\") pod \"console-5d69bfccc8-2zqc7\" (UID: \"6d6827fb-6862-4c14-bafc-ee57512b2681\") " pod="openshift-console/console-5d69bfccc8-2zqc7" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.156675 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld62x\" (UniqueName: \"kubernetes.io/projected/6d6827fb-6862-4c14-bafc-ee57512b2681-kube-api-access-ld62x\") pod \"console-5d69bfccc8-2zqc7\" (UID: \"6d6827fb-6862-4c14-bafc-ee57512b2681\") " pod="openshift-console/console-5d69bfccc8-2zqc7" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.156829 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6d6827fb-6862-4c14-bafc-ee57512b2681-oauth-serving-cert\") pod \"console-5d69bfccc8-2zqc7\" (UID: \"6d6827fb-6862-4c14-bafc-ee57512b2681\") " pod="openshift-console/console-5d69bfccc8-2zqc7" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.194898 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-75bxq" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.235481 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-z7l7l" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.257764 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d6827fb-6862-4c14-bafc-ee57512b2681-console-serving-cert\") pod \"console-5d69bfccc8-2zqc7\" (UID: \"6d6827fb-6862-4c14-bafc-ee57512b2681\") " pod="openshift-console/console-5d69bfccc8-2zqc7" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.257859 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld62x\" (UniqueName: \"kubernetes.io/projected/6d6827fb-6862-4c14-bafc-ee57512b2681-kube-api-access-ld62x\") pod \"console-5d69bfccc8-2zqc7\" (UID: \"6d6827fb-6862-4c14-bafc-ee57512b2681\") " pod="openshift-console/console-5d69bfccc8-2zqc7" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.257924 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6d6827fb-6862-4c14-bafc-ee57512b2681-oauth-serving-cert\") pod \"console-5d69bfccc8-2zqc7\" (UID: \"6d6827fb-6862-4c14-bafc-ee57512b2681\") " pod="openshift-console/console-5d69bfccc8-2zqc7" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.257958 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6d6827fb-6862-4c14-bafc-ee57512b2681-console-config\") pod \"console-5d69bfccc8-2zqc7\" (UID: \"6d6827fb-6862-4c14-bafc-ee57512b2681\") " pod="openshift-console/console-5d69bfccc8-2zqc7" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.257999 5024 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d6827fb-6862-4c14-bafc-ee57512b2681-trusted-ca-bundle\") pod \"console-5d69bfccc8-2zqc7\" (UID: \"6d6827fb-6862-4c14-bafc-ee57512b2681\") " pod="openshift-console/console-5d69bfccc8-2zqc7" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.258042 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6d6827fb-6862-4c14-bafc-ee57512b2681-console-oauth-config\") pod \"console-5d69bfccc8-2zqc7\" (UID: \"6d6827fb-6862-4c14-bafc-ee57512b2681\") " pod="openshift-console/console-5d69bfccc8-2zqc7" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.258093 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6d6827fb-6862-4c14-bafc-ee57512b2681-service-ca\") pod \"console-5d69bfccc8-2zqc7\" (UID: \"6d6827fb-6862-4c14-bafc-ee57512b2681\") " pod="openshift-console/console-5d69bfccc8-2zqc7" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.259260 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6d6827fb-6862-4c14-bafc-ee57512b2681-console-config\") pod \"console-5d69bfccc8-2zqc7\" (UID: \"6d6827fb-6862-4c14-bafc-ee57512b2681\") " pod="openshift-console/console-5d69bfccc8-2zqc7" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.259323 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6d6827fb-6862-4c14-bafc-ee57512b2681-service-ca\") pod \"console-5d69bfccc8-2zqc7\" (UID: \"6d6827fb-6862-4c14-bafc-ee57512b2681\") " pod="openshift-console/console-5d69bfccc8-2zqc7" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.259502 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6d6827fb-6862-4c14-bafc-ee57512b2681-trusted-ca-bundle\") pod \"console-5d69bfccc8-2zqc7\" (UID: \"6d6827fb-6862-4c14-bafc-ee57512b2681\") " pod="openshift-console/console-5d69bfccc8-2zqc7" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.259545 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6d6827fb-6862-4c14-bafc-ee57512b2681-oauth-serving-cert\") pod \"console-5d69bfccc8-2zqc7\" (UID: \"6d6827fb-6862-4c14-bafc-ee57512b2681\") " pod="openshift-console/console-5d69bfccc8-2zqc7" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.263663 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6d6827fb-6862-4c14-bafc-ee57512b2681-console-oauth-config\") pod \"console-5d69bfccc8-2zqc7\" (UID: \"6d6827fb-6862-4c14-bafc-ee57512b2681\") " pod="openshift-console/console-5d69bfccc8-2zqc7" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.263690 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d6827fb-6862-4c14-bafc-ee57512b2681-console-serving-cert\") pod \"console-5d69bfccc8-2zqc7\" (UID: \"6d6827fb-6862-4c14-bafc-ee57512b2681\") " pod="openshift-console/console-5d69bfccc8-2zqc7" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.282781 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld62x\" (UniqueName: \"kubernetes.io/projected/6d6827fb-6862-4c14-bafc-ee57512b2681-kube-api-access-ld62x\") pod \"console-5d69bfccc8-2zqc7\" (UID: \"6d6827fb-6862-4c14-bafc-ee57512b2681\") " pod="openshift-console/console-5d69bfccc8-2zqc7" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.405719 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d69bfccc8-2zqc7" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.464419 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b253e4aa-8785-427b-bdf2-d2efa0af3671-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-szjk6\" (UID: \"b253e4aa-8785-427b-bdf2-d2efa0af3671\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-szjk6" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.487464 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-x47wd" event={"ID":"1c8be1f9-a445-4dfd-9ad0-9c8b222e139e","Type":"ContainerStarted","Data":"b43e75019e076451e34f6b2cd1a15e1ad224f12a336c8fce20b89d3d2942d827"} Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.487976 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b253e4aa-8785-427b-bdf2-d2efa0af3671-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-szjk6\" (UID: \"b253e4aa-8785-427b-bdf2-d2efa0af3671\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-szjk6" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.605369 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-75bxq"] Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.691410 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-szjk6" Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.712281 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-z7l7l"] Oct 07 12:40:35 crc kubenswrapper[5024]: W1007 12:40:35.718748 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb466eea2_b593_4881_9b9e_af8b75bdead1.slice/crio-d5f43591054c557b06bffd4c1fe063295fd9b1add071244d384583a9e282d628 WatchSource:0}: Error finding container d5f43591054c557b06bffd4c1fe063295fd9b1add071244d384583a9e282d628: Status 404 returned error can't find the container with id d5f43591054c557b06bffd4c1fe063295fd9b1add071244d384583a9e282d628 Oct 07 12:40:35 crc kubenswrapper[5024]: I1007 12:40:35.829349 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d69bfccc8-2zqc7"] Oct 07 12:40:36 crc kubenswrapper[5024]: I1007 12:40:36.093820 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-szjk6"] Oct 07 12:40:36 crc kubenswrapper[5024]: W1007 12:40:36.098720 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb253e4aa_8785_427b_bdf2_d2efa0af3671.slice/crio-6aa81403c6cdbfeb8932daa00b3c35609361b6f50ff8b739e5763574398cd29c WatchSource:0}: Error finding container 6aa81403c6cdbfeb8932daa00b3c35609361b6f50ff8b739e5763574398cd29c: Status 404 returned error can't find the container with id 6aa81403c6cdbfeb8932daa00b3c35609361b6f50ff8b739e5763574398cd29c Oct 07 12:40:36 crc kubenswrapper[5024]: I1007 12:40:36.499282 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-szjk6" 
event={"ID":"b253e4aa-8785-427b-bdf2-d2efa0af3671","Type":"ContainerStarted","Data":"6aa81403c6cdbfeb8932daa00b3c35609361b6f50ff8b739e5763574398cd29c"} Oct 07 12:40:36 crc kubenswrapper[5024]: I1007 12:40:36.500343 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d69bfccc8-2zqc7" event={"ID":"6d6827fb-6862-4c14-bafc-ee57512b2681","Type":"ContainerStarted","Data":"d889d18483b7830b3b4b2c251f60f3e230d8f7d118d5dd483ae99ba7b33a925c"} Oct 07 12:40:36 crc kubenswrapper[5024]: I1007 12:40:36.501486 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-z7l7l" event={"ID":"b466eea2-b593-4881-9b9e-af8b75bdead1","Type":"ContainerStarted","Data":"d5f43591054c557b06bffd4c1fe063295fd9b1add071244d384583a9e282d628"} Oct 07 12:40:36 crc kubenswrapper[5024]: I1007 12:40:36.502434 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-75bxq" event={"ID":"5ff62656-a8c7-4f5e-a6ea-6e3324d284ef","Type":"ContainerStarted","Data":"691d7032b08a6670716e77a41dcbc415bc64e8584357b8aba8410014141f5271"} Oct 07 12:40:38 crc kubenswrapper[5024]: I1007 12:40:38.517746 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d69bfccc8-2zqc7" event={"ID":"6d6827fb-6862-4c14-bafc-ee57512b2681","Type":"ContainerStarted","Data":"35825b1a889a28aeb4ddb3f5ec78dbfb0002cba7bc0739e9646cbcf937a91b63"} Oct 07 12:40:38 crc kubenswrapper[5024]: I1007 12:40:38.537260 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5d69bfccc8-2zqc7" podStartSLOduration=3.537244929 podStartE2EDuration="3.537244929s" podCreationTimestamp="2025-10-07 12:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:40:38.533325443 +0000 UTC m=+776.609112281" watchObservedRunningTime="2025-10-07 12:40:38.537244929 +0000 
UTC m=+776.613031767" Oct 07 12:40:41 crc kubenswrapper[5024]: I1007 12:40:41.537336 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-x47wd" event={"ID":"1c8be1f9-a445-4dfd-9ad0-9c8b222e139e","Type":"ContainerStarted","Data":"ec609b9b44c553508c96c1436efaf75cd49a0f3b4ba6f3386b86fe06a0a8d78d"} Oct 07 12:40:41 crc kubenswrapper[5024]: I1007 12:40:41.538211 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-x47wd" Oct 07 12:40:41 crc kubenswrapper[5024]: I1007 12:40:41.540077 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-z7l7l" event={"ID":"b466eea2-b593-4881-9b9e-af8b75bdead1","Type":"ContainerStarted","Data":"ef33d4b8af406370c71bc9ce08efd0b1993cac1be766f36d216a7cc8b1c54302"} Oct 07 12:40:41 crc kubenswrapper[5024]: I1007 12:40:41.541652 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-75bxq" event={"ID":"5ff62656-a8c7-4f5e-a6ea-6e3324d284ef","Type":"ContainerStarted","Data":"c5a337b2087782958101d7df6c027a9d12f1c96653ed011dcdbb493e128b5324"} Oct 07 12:40:41 crc kubenswrapper[5024]: I1007 12:40:41.543682 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-szjk6" event={"ID":"b253e4aa-8785-427b-bdf2-d2efa0af3671","Type":"ContainerStarted","Data":"4c1b43d67927ae17b83268eedf22d023973e19b748f1caa268122d3cf04f661d"} Oct 07 12:40:41 crc kubenswrapper[5024]: I1007 12:40:41.544052 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-szjk6" Oct 07 12:40:41 crc kubenswrapper[5024]: I1007 12:40:41.553934 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-x47wd" podStartSLOduration=1.481031353 podStartE2EDuration="7.553914256s" podCreationTimestamp="2025-10-07 12:40:34 +0000 UTC" 
firstStartedPulling="2025-10-07 12:40:35.103191496 +0000 UTC m=+773.178978334" lastFinishedPulling="2025-10-07 12:40:41.176074399 +0000 UTC m=+779.251861237" observedRunningTime="2025-10-07 12:40:41.552722241 +0000 UTC m=+779.628509079" watchObservedRunningTime="2025-10-07 12:40:41.553914256 +0000 UTC m=+779.629701094" Oct 07 12:40:41 crc kubenswrapper[5024]: I1007 12:40:41.568449 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-75bxq" podStartSLOduration=2.013480418 podStartE2EDuration="7.568436251s" podCreationTimestamp="2025-10-07 12:40:34 +0000 UTC" firstStartedPulling="2025-10-07 12:40:35.623534807 +0000 UTC m=+773.699321645" lastFinishedPulling="2025-10-07 12:40:41.17849064 +0000 UTC m=+779.254277478" observedRunningTime="2025-10-07 12:40:41.568066701 +0000 UTC m=+779.643853539" watchObservedRunningTime="2025-10-07 12:40:41.568436251 +0000 UTC m=+779.644223089" Oct 07 12:40:41 crc kubenswrapper[5024]: I1007 12:40:41.598668 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-szjk6" podStartSLOduration=2.500206212 podStartE2EDuration="7.598653227s" podCreationTimestamp="2025-10-07 12:40:34 +0000 UTC" firstStartedPulling="2025-10-07 12:40:36.100648099 +0000 UTC m=+774.176434937" lastFinishedPulling="2025-10-07 12:40:41.199095114 +0000 UTC m=+779.274881952" observedRunningTime="2025-10-07 12:40:41.595721791 +0000 UTC m=+779.671508619" watchObservedRunningTime="2025-10-07 12:40:41.598653227 +0000 UTC m=+779.674440065" Oct 07 12:40:42 crc kubenswrapper[5024]: I1007 12:40:42.522623 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2bsdt" Oct 07 12:40:42 crc kubenswrapper[5024]: I1007 12:40:42.522991 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2bsdt" Oct 07 12:40:42 crc 
kubenswrapper[5024]: I1007 12:40:42.563405 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2bsdt" Oct 07 12:40:42 crc kubenswrapper[5024]: I1007 12:40:42.600123 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2bsdt" Oct 07 12:40:42 crc kubenswrapper[5024]: I1007 12:40:42.670057 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2bsdt"] Oct 07 12:40:42 crc kubenswrapper[5024]: I1007 12:40:42.795019 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7cxfk"] Oct 07 12:40:42 crc kubenswrapper[5024]: I1007 12:40:42.795300 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7cxfk" podUID="8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564" containerName="registry-server" containerID="cri-o://fe84a3deb9b060e2722004b8ebb6affa9b410ac9a78b900ebcc4eb2fc0f8f972" gracePeriod=2 Oct 07 12:40:43 crc kubenswrapper[5024]: I1007 12:40:43.292131 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7cxfk" Oct 07 12:40:43 crc kubenswrapper[5024]: I1007 12:40:43.468655 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564-utilities\") pod \"8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564\" (UID: \"8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564\") " Oct 07 12:40:43 crc kubenswrapper[5024]: I1007 12:40:43.468714 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564-catalog-content\") pod \"8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564\" (UID: \"8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564\") " Oct 07 12:40:43 crc kubenswrapper[5024]: I1007 12:40:43.468756 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzp88\" (UniqueName: \"kubernetes.io/projected/8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564-kube-api-access-xzp88\") pod \"8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564\" (UID: \"8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564\") " Oct 07 12:40:43 crc kubenswrapper[5024]: I1007 12:40:43.469951 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564-utilities" (OuterVolumeSpecName: "utilities") pod "8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564" (UID: "8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:40:43 crc kubenswrapper[5024]: I1007 12:40:43.487999 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564-kube-api-access-xzp88" (OuterVolumeSpecName: "kube-api-access-xzp88") pod "8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564" (UID: "8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564"). InnerVolumeSpecName "kube-api-access-xzp88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:40:43 crc kubenswrapper[5024]: I1007 12:40:43.564975 5024 generic.go:334] "Generic (PLEG): container finished" podID="8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564" containerID="fe84a3deb9b060e2722004b8ebb6affa9b410ac9a78b900ebcc4eb2fc0f8f972" exitCode=0 Oct 07 12:40:43 crc kubenswrapper[5024]: I1007 12:40:43.565914 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7cxfk" Oct 07 12:40:43 crc kubenswrapper[5024]: I1007 12:40:43.566336 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7cxfk" event={"ID":"8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564","Type":"ContainerDied","Data":"fe84a3deb9b060e2722004b8ebb6affa9b410ac9a78b900ebcc4eb2fc0f8f972"} Oct 07 12:40:43 crc kubenswrapper[5024]: I1007 12:40:43.566410 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7cxfk" event={"ID":"8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564","Type":"ContainerDied","Data":"8e93565af4bebcade4591b5e0cbe2a1884c2b959330459f8cceeae06c42822b0"} Oct 07 12:40:43 crc kubenswrapper[5024]: I1007 12:40:43.566429 5024 scope.go:117] "RemoveContainer" containerID="fe84a3deb9b060e2722004b8ebb6affa9b410ac9a78b900ebcc4eb2fc0f8f972" Oct 07 12:40:43 crc kubenswrapper[5024]: I1007 12:40:43.570552 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 12:40:43 crc kubenswrapper[5024]: I1007 12:40:43.570575 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzp88\" (UniqueName: \"kubernetes.io/projected/8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564-kube-api-access-xzp88\") on node \"crc\" DevicePath \"\"" Oct 07 12:40:43 crc kubenswrapper[5024]: I1007 12:40:43.571973 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564" (UID: "8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:40:43 crc kubenswrapper[5024]: I1007 12:40:43.591782 5024 scope.go:117] "RemoveContainer" containerID="3056157652b4942f4fed8b79003afd166aa7b4fbc0f61185fcfe34644befa783" Oct 07 12:40:43 crc kubenswrapper[5024]: I1007 12:40:43.610660 5024 scope.go:117] "RemoveContainer" containerID="be35d4b655b24cc37aa260f8eea39e1cbafdae3a3b85b8d11bb2efdd82b06f15" Oct 07 12:40:43 crc kubenswrapper[5024]: I1007 12:40:43.630812 5024 scope.go:117] "RemoveContainer" containerID="fe84a3deb9b060e2722004b8ebb6affa9b410ac9a78b900ebcc4eb2fc0f8f972" Oct 07 12:40:43 crc kubenswrapper[5024]: E1007 12:40:43.632968 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe84a3deb9b060e2722004b8ebb6affa9b410ac9a78b900ebcc4eb2fc0f8f972\": container with ID starting with fe84a3deb9b060e2722004b8ebb6affa9b410ac9a78b900ebcc4eb2fc0f8f972 not found: ID does not exist" containerID="fe84a3deb9b060e2722004b8ebb6affa9b410ac9a78b900ebcc4eb2fc0f8f972" Oct 07 12:40:43 crc kubenswrapper[5024]: I1007 12:40:43.633017 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe84a3deb9b060e2722004b8ebb6affa9b410ac9a78b900ebcc4eb2fc0f8f972"} err="failed to get container status \"fe84a3deb9b060e2722004b8ebb6affa9b410ac9a78b900ebcc4eb2fc0f8f972\": rpc error: code = NotFound desc = could not find container \"fe84a3deb9b060e2722004b8ebb6affa9b410ac9a78b900ebcc4eb2fc0f8f972\": container with ID starting with fe84a3deb9b060e2722004b8ebb6affa9b410ac9a78b900ebcc4eb2fc0f8f972 not found: ID does not exist" Oct 07 12:40:43 crc kubenswrapper[5024]: I1007 12:40:43.633052 5024 scope.go:117] 
"RemoveContainer" containerID="3056157652b4942f4fed8b79003afd166aa7b4fbc0f61185fcfe34644befa783" Oct 07 12:40:43 crc kubenswrapper[5024]: E1007 12:40:43.633380 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3056157652b4942f4fed8b79003afd166aa7b4fbc0f61185fcfe34644befa783\": container with ID starting with 3056157652b4942f4fed8b79003afd166aa7b4fbc0f61185fcfe34644befa783 not found: ID does not exist" containerID="3056157652b4942f4fed8b79003afd166aa7b4fbc0f61185fcfe34644befa783" Oct 07 12:40:43 crc kubenswrapper[5024]: I1007 12:40:43.633404 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3056157652b4942f4fed8b79003afd166aa7b4fbc0f61185fcfe34644befa783"} err="failed to get container status \"3056157652b4942f4fed8b79003afd166aa7b4fbc0f61185fcfe34644befa783\": rpc error: code = NotFound desc = could not find container \"3056157652b4942f4fed8b79003afd166aa7b4fbc0f61185fcfe34644befa783\": container with ID starting with 3056157652b4942f4fed8b79003afd166aa7b4fbc0f61185fcfe34644befa783 not found: ID does not exist" Oct 07 12:40:43 crc kubenswrapper[5024]: I1007 12:40:43.633421 5024 scope.go:117] "RemoveContainer" containerID="be35d4b655b24cc37aa260f8eea39e1cbafdae3a3b85b8d11bb2efdd82b06f15" Oct 07 12:40:43 crc kubenswrapper[5024]: E1007 12:40:43.633778 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be35d4b655b24cc37aa260f8eea39e1cbafdae3a3b85b8d11bb2efdd82b06f15\": container with ID starting with be35d4b655b24cc37aa260f8eea39e1cbafdae3a3b85b8d11bb2efdd82b06f15 not found: ID does not exist" containerID="be35d4b655b24cc37aa260f8eea39e1cbafdae3a3b85b8d11bb2efdd82b06f15" Oct 07 12:40:43 crc kubenswrapper[5024]: I1007 12:40:43.633803 5024 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"be35d4b655b24cc37aa260f8eea39e1cbafdae3a3b85b8d11bb2efdd82b06f15"} err="failed to get container status \"be35d4b655b24cc37aa260f8eea39e1cbafdae3a3b85b8d11bb2efdd82b06f15\": rpc error: code = NotFound desc = could not find container \"be35d4b655b24cc37aa260f8eea39e1cbafdae3a3b85b8d11bb2efdd82b06f15\": container with ID starting with be35d4b655b24cc37aa260f8eea39e1cbafdae3a3b85b8d11bb2efdd82b06f15 not found: ID does not exist" Oct 07 12:40:43 crc kubenswrapper[5024]: I1007 12:40:43.672535 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 12:40:43 crc kubenswrapper[5024]: I1007 12:40:43.720817 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 12:40:43 crc kubenswrapper[5024]: I1007 12:40:43.720891 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 12:40:43 crc kubenswrapper[5024]: I1007 12:40:43.894522 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7cxfk"] Oct 07 12:40:43 crc kubenswrapper[5024]: I1007 12:40:43.901680 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7cxfk"] Oct 07 12:40:44 crc kubenswrapper[5024]: I1007 12:40:44.758737 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564" 
path="/var/lib/kubelet/pods/8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564/volumes" Oct 07 12:40:45 crc kubenswrapper[5024]: I1007 12:40:45.405940 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d69bfccc8-2zqc7" Oct 07 12:40:45 crc kubenswrapper[5024]: I1007 12:40:45.406339 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d69bfccc8-2zqc7" Oct 07 12:40:45 crc kubenswrapper[5024]: I1007 12:40:45.410595 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5d69bfccc8-2zqc7" Oct 07 12:40:45 crc kubenswrapper[5024]: I1007 12:40:45.578262 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-z7l7l" event={"ID":"b466eea2-b593-4881-9b9e-af8b75bdead1","Type":"ContainerStarted","Data":"e6dd4639c59f0606f4e6d9adce2f60cae3e7b024744b0d2f43da55132da41d87"} Oct 07 12:40:45 crc kubenswrapper[5024]: I1007 12:40:45.581432 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5d69bfccc8-2zqc7" Oct 07 12:40:45 crc kubenswrapper[5024]: I1007 12:40:45.595818 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-z7l7l" podStartSLOduration=2.857391674 podStartE2EDuration="11.595800864s" podCreationTimestamp="2025-10-07 12:40:34 +0000 UTC" firstStartedPulling="2025-10-07 12:40:35.723850104 +0000 UTC m=+773.799636942" lastFinishedPulling="2025-10-07 12:40:44.462259294 +0000 UTC m=+782.538046132" observedRunningTime="2025-10-07 12:40:45.591753295 +0000 UTC m=+783.667540133" watchObservedRunningTime="2025-10-07 12:40:45.595800864 +0000 UTC m=+783.671587702" Oct 07 12:40:45 crc kubenswrapper[5024]: I1007 12:40:45.644660 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9t72d"] Oct 07 12:40:50 crc kubenswrapper[5024]: I1007 
12:40:50.105397 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-x47wd" Oct 07 12:40:55 crc kubenswrapper[5024]: I1007 12:40:55.697097 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-szjk6" Oct 07 12:41:00 crc kubenswrapper[5024]: I1007 12:41:00.716837 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hc46j"] Oct 07 12:41:00 crc kubenswrapper[5024]: E1007 12:41:00.717382 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564" containerName="extract-content" Oct 07 12:41:00 crc kubenswrapper[5024]: I1007 12:41:00.717396 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564" containerName="extract-content" Oct 07 12:41:00 crc kubenswrapper[5024]: E1007 12:41:00.717419 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564" containerName="extract-utilities" Oct 07 12:41:00 crc kubenswrapper[5024]: I1007 12:41:00.717427 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564" containerName="extract-utilities" Oct 07 12:41:00 crc kubenswrapper[5024]: E1007 12:41:00.717442 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564" containerName="registry-server" Oct 07 12:41:00 crc kubenswrapper[5024]: I1007 12:41:00.717452 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564" containerName="registry-server" Oct 07 12:41:00 crc kubenswrapper[5024]: I1007 12:41:00.717590 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd3bcc6-e851-49ed-8b52-ed1ebfcf5564" containerName="registry-server" Oct 07 12:41:00 crc kubenswrapper[5024]: I1007 12:41:00.720900 5024 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hc46j" Oct 07 12:41:00 crc kubenswrapper[5024]: I1007 12:41:00.727541 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hc46j"] Oct 07 12:41:00 crc kubenswrapper[5024]: I1007 12:41:00.898756 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c-catalog-content\") pod \"certified-operators-hc46j\" (UID: \"f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c\") " pod="openshift-marketplace/certified-operators-hc46j" Oct 07 12:41:00 crc kubenswrapper[5024]: I1007 12:41:00.898814 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6pn9\" (UniqueName: \"kubernetes.io/projected/f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c-kube-api-access-p6pn9\") pod \"certified-operators-hc46j\" (UID: \"f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c\") " pod="openshift-marketplace/certified-operators-hc46j" Oct 07 12:41:00 crc kubenswrapper[5024]: I1007 12:41:00.898865 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c-utilities\") pod \"certified-operators-hc46j\" (UID: \"f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c\") " pod="openshift-marketplace/certified-operators-hc46j" Oct 07 12:41:01 crc kubenswrapper[5024]: I1007 12:41:00.999965 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c-catalog-content\") pod \"certified-operators-hc46j\" (UID: \"f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c\") " pod="openshift-marketplace/certified-operators-hc46j" Oct 07 12:41:01 crc kubenswrapper[5024]: I1007 12:41:01.000069 5024 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6pn9\" (UniqueName: \"kubernetes.io/projected/f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c-kube-api-access-p6pn9\") pod \"certified-operators-hc46j\" (UID: \"f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c\") " pod="openshift-marketplace/certified-operators-hc46j" Oct 07 12:41:01 crc kubenswrapper[5024]: I1007 12:41:01.000118 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c-utilities\") pod \"certified-operators-hc46j\" (UID: \"f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c\") " pod="openshift-marketplace/certified-operators-hc46j" Oct 07 12:41:01 crc kubenswrapper[5024]: I1007 12:41:01.000788 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c-utilities\") pod \"certified-operators-hc46j\" (UID: \"f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c\") " pod="openshift-marketplace/certified-operators-hc46j" Oct 07 12:41:01 crc kubenswrapper[5024]: I1007 12:41:01.001028 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c-catalog-content\") pod \"certified-operators-hc46j\" (UID: \"f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c\") " pod="openshift-marketplace/certified-operators-hc46j" Oct 07 12:41:01 crc kubenswrapper[5024]: I1007 12:41:01.034780 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6pn9\" (UniqueName: \"kubernetes.io/projected/f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c-kube-api-access-p6pn9\") pod \"certified-operators-hc46j\" (UID: \"f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c\") " pod="openshift-marketplace/certified-operators-hc46j" Oct 07 12:41:01 crc kubenswrapper[5024]: I1007 12:41:01.056894 5024 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-hc46j" Oct 07 12:41:01 crc kubenswrapper[5024]: I1007 12:41:01.568851 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hc46j"] Oct 07 12:41:01 crc kubenswrapper[5024]: W1007 12:41:01.578893 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7f38f0e_b8de_4bc5_a5e4_78edde8fee7c.slice/crio-840d56b7b62dd4924b46488c74adb0a263e720546da013d71059bccb17bb3edd WatchSource:0}: Error finding container 840d56b7b62dd4924b46488c74adb0a263e720546da013d71059bccb17bb3edd: Status 404 returned error can't find the container with id 840d56b7b62dd4924b46488c74adb0a263e720546da013d71059bccb17bb3edd Oct 07 12:41:01 crc kubenswrapper[5024]: I1007 12:41:01.683416 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hc46j" event={"ID":"f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c","Type":"ContainerStarted","Data":"840d56b7b62dd4924b46488c74adb0a263e720546da013d71059bccb17bb3edd"} Oct 07 12:41:03 crc kubenswrapper[5024]: I1007 12:41:03.711088 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hc46j" event={"ID":"f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c","Type":"ContainerStarted","Data":"fe8d163c9ba2baa6d0e87e160784209dac3d259fa3d19b7a9f3877972de0c2fd"} Oct 07 12:41:04 crc kubenswrapper[5024]: I1007 12:41:04.719867 5024 generic.go:334] "Generic (PLEG): container finished" podID="f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c" containerID="fe8d163c9ba2baa6d0e87e160784209dac3d259fa3d19b7a9f3877972de0c2fd" exitCode=0 Oct 07 12:41:04 crc kubenswrapper[5024]: I1007 12:41:04.720211 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hc46j" 
event={"ID":"f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c","Type":"ContainerDied","Data":"fe8d163c9ba2baa6d0e87e160784209dac3d259fa3d19b7a9f3877972de0c2fd"} Oct 07 12:41:06 crc kubenswrapper[5024]: I1007 12:41:06.104340 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-58v7p"] Oct 07 12:41:06 crc kubenswrapper[5024]: I1007 12:41:06.106358 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-58v7p" Oct 07 12:41:06 crc kubenswrapper[5024]: I1007 12:41:06.113997 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-58v7p"] Oct 07 12:41:06 crc kubenswrapper[5024]: I1007 12:41:06.176891 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0b2cf9c-9454-4240-aa8e-370cda42f4c4-catalog-content\") pod \"community-operators-58v7p\" (UID: \"a0b2cf9c-9454-4240-aa8e-370cda42f4c4\") " pod="openshift-marketplace/community-operators-58v7p" Oct 07 12:41:06 crc kubenswrapper[5024]: I1007 12:41:06.177198 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdrjn\" (UniqueName: \"kubernetes.io/projected/a0b2cf9c-9454-4240-aa8e-370cda42f4c4-kube-api-access-hdrjn\") pod \"community-operators-58v7p\" (UID: \"a0b2cf9c-9454-4240-aa8e-370cda42f4c4\") " pod="openshift-marketplace/community-operators-58v7p" Oct 07 12:41:06 crc kubenswrapper[5024]: I1007 12:41:06.177331 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0b2cf9c-9454-4240-aa8e-370cda42f4c4-utilities\") pod \"community-operators-58v7p\" (UID: \"a0b2cf9c-9454-4240-aa8e-370cda42f4c4\") " pod="openshift-marketplace/community-operators-58v7p" Oct 07 12:41:06 crc kubenswrapper[5024]: I1007 12:41:06.278343 
5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0b2cf9c-9454-4240-aa8e-370cda42f4c4-catalog-content\") pod \"community-operators-58v7p\" (UID: \"a0b2cf9c-9454-4240-aa8e-370cda42f4c4\") " pod="openshift-marketplace/community-operators-58v7p" Oct 07 12:41:06 crc kubenswrapper[5024]: I1007 12:41:06.278398 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdrjn\" (UniqueName: \"kubernetes.io/projected/a0b2cf9c-9454-4240-aa8e-370cda42f4c4-kube-api-access-hdrjn\") pod \"community-operators-58v7p\" (UID: \"a0b2cf9c-9454-4240-aa8e-370cda42f4c4\") " pod="openshift-marketplace/community-operators-58v7p" Oct 07 12:41:06 crc kubenswrapper[5024]: I1007 12:41:06.278443 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0b2cf9c-9454-4240-aa8e-370cda42f4c4-utilities\") pod \"community-operators-58v7p\" (UID: \"a0b2cf9c-9454-4240-aa8e-370cda42f4c4\") " pod="openshift-marketplace/community-operators-58v7p" Oct 07 12:41:06 crc kubenswrapper[5024]: I1007 12:41:06.279050 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0b2cf9c-9454-4240-aa8e-370cda42f4c4-utilities\") pod \"community-operators-58v7p\" (UID: \"a0b2cf9c-9454-4240-aa8e-370cda42f4c4\") " pod="openshift-marketplace/community-operators-58v7p" Oct 07 12:41:06 crc kubenswrapper[5024]: I1007 12:41:06.279043 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0b2cf9c-9454-4240-aa8e-370cda42f4c4-catalog-content\") pod \"community-operators-58v7p\" (UID: \"a0b2cf9c-9454-4240-aa8e-370cda42f4c4\") " pod="openshift-marketplace/community-operators-58v7p" Oct 07 12:41:06 crc kubenswrapper[5024]: I1007 12:41:06.315346 5024 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hdrjn\" (UniqueName: \"kubernetes.io/projected/a0b2cf9c-9454-4240-aa8e-370cda42f4c4-kube-api-access-hdrjn\") pod \"community-operators-58v7p\" (UID: \"a0b2cf9c-9454-4240-aa8e-370cda42f4c4\") " pod="openshift-marketplace/community-operators-58v7p" Oct 07 12:41:06 crc kubenswrapper[5024]: I1007 12:41:06.482504 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-58v7p" Oct 07 12:41:06 crc kubenswrapper[5024]: I1007 12:41:06.733670 5024 generic.go:334] "Generic (PLEG): container finished" podID="f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c" containerID="244623ea13d6b7a488f8cff72af9cb0385669f4de5a8c99763dd8ae8974dbc89" exitCode=0 Oct 07 12:41:06 crc kubenswrapper[5024]: I1007 12:41:06.733732 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hc46j" event={"ID":"f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c","Type":"ContainerDied","Data":"244623ea13d6b7a488f8cff72af9cb0385669f4de5a8c99763dd8ae8974dbc89"} Oct 07 12:41:07 crc kubenswrapper[5024]: I1007 12:41:07.020950 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-58v7p"] Oct 07 12:41:07 crc kubenswrapper[5024]: I1007 12:41:07.747545 5024 generic.go:334] "Generic (PLEG): container finished" podID="a0b2cf9c-9454-4240-aa8e-370cda42f4c4" containerID="93c383eb1637d2c348beac4f9b2081378b5adc316b0dcbdac1f9fa91a0ac4b01" exitCode=0 Oct 07 12:41:07 crc kubenswrapper[5024]: I1007 12:41:07.748347 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58v7p" event={"ID":"a0b2cf9c-9454-4240-aa8e-370cda42f4c4","Type":"ContainerDied","Data":"93c383eb1637d2c348beac4f9b2081378b5adc316b0dcbdac1f9fa91a0ac4b01"} Oct 07 12:41:07 crc kubenswrapper[5024]: I1007 12:41:07.748375 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58v7p" 
event={"ID":"a0b2cf9c-9454-4240-aa8e-370cda42f4c4","Type":"ContainerStarted","Data":"eac434516d4170c714ee2a6d041fbe4be975390d3dcdedacd015a0b80a0e69d3"} Oct 07 12:41:07 crc kubenswrapper[5024]: I1007 12:41:07.752618 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hc46j" event={"ID":"f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c","Type":"ContainerStarted","Data":"7e7361259a88db6af4d72728e1f46005414444572ab2d41eea060f1a9495d4b9"} Oct 07 12:41:07 crc kubenswrapper[5024]: I1007 12:41:07.792542 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hc46j" podStartSLOduration=5.385883472 podStartE2EDuration="7.792522952s" podCreationTimestamp="2025-10-07 12:41:00 +0000 UTC" firstStartedPulling="2025-10-07 12:41:04.722631728 +0000 UTC m=+802.798418566" lastFinishedPulling="2025-10-07 12:41:07.129271158 +0000 UTC m=+805.205058046" observedRunningTime="2025-10-07 12:41:07.790182133 +0000 UTC m=+805.865968991" watchObservedRunningTime="2025-10-07 12:41:07.792522952 +0000 UTC m=+805.868309790" Oct 07 12:41:09 crc kubenswrapper[5024]: I1007 12:41:09.766496 5024 generic.go:334] "Generic (PLEG): container finished" podID="a0b2cf9c-9454-4240-aa8e-370cda42f4c4" containerID="07a954d743dbb071df4b1be061e0623bf1bb71e0bb7880b3a864290690081b2b" exitCode=0 Oct 07 12:41:09 crc kubenswrapper[5024]: I1007 12:41:09.767260 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58v7p" event={"ID":"a0b2cf9c-9454-4240-aa8e-370cda42f4c4","Type":"ContainerDied","Data":"07a954d743dbb071df4b1be061e0623bf1bb71e0bb7880b3a864290690081b2b"} Oct 07 12:41:10 crc kubenswrapper[5024]: I1007 12:41:10.687373 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-9t72d" podUID="f631c93e-2066-410d-bfcb-232ee1cced2a" containerName="console" 
containerID="cri-o://fbe13cf83795b5253065ad073bffb3684a4ee29d5e554160d6f36c2d2fe5a4a8" gracePeriod=15 Oct 07 12:41:10 crc kubenswrapper[5024]: I1007 12:41:10.777025 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58v7p" event={"ID":"a0b2cf9c-9454-4240-aa8e-370cda42f4c4","Type":"ContainerStarted","Data":"142a9e41e2c68063e4fa94ac3ad5ab3fd612cd279486cfef6689eb186f2f3e2a"} Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.057486 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hc46j" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.057995 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hc46j" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.109857 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hc46j" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.133668 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-58v7p" podStartSLOduration=2.5044775919999998 podStartE2EDuration="5.133628856s" podCreationTimestamp="2025-10-07 12:41:06 +0000 UTC" firstStartedPulling="2025-10-07 12:41:07.750822449 +0000 UTC m=+805.826609297" lastFinishedPulling="2025-10-07 12:41:10.379973723 +0000 UTC m=+808.455760561" observedRunningTime="2025-10-07 12:41:10.804946101 +0000 UTC m=+808.880732939" watchObservedRunningTime="2025-10-07 12:41:11.133628856 +0000 UTC m=+809.209415704" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.136015 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9t72d_f631c93e-2066-410d-bfcb-232ee1cced2a/console/0.log" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.136095 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9t72d" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.254822 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f631c93e-2066-410d-bfcb-232ee1cced2a-console-oauth-config\") pod \"f631c93e-2066-410d-bfcb-232ee1cced2a\" (UID: \"f631c93e-2066-410d-bfcb-232ee1cced2a\") " Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.254889 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f631c93e-2066-410d-bfcb-232ee1cced2a-console-serving-cert\") pod \"f631c93e-2066-410d-bfcb-232ee1cced2a\" (UID: \"f631c93e-2066-410d-bfcb-232ee1cced2a\") " Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.254920 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxjjc\" (UniqueName: \"kubernetes.io/projected/f631c93e-2066-410d-bfcb-232ee1cced2a-kube-api-access-bxjjc\") pod \"f631c93e-2066-410d-bfcb-232ee1cced2a\" (UID: \"f631c93e-2066-410d-bfcb-232ee1cced2a\") " Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.254989 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f631c93e-2066-410d-bfcb-232ee1cced2a-service-ca\") pod \"f631c93e-2066-410d-bfcb-232ee1cced2a\" (UID: \"f631c93e-2066-410d-bfcb-232ee1cced2a\") " Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.255029 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f631c93e-2066-410d-bfcb-232ee1cced2a-trusted-ca-bundle\") pod \"f631c93e-2066-410d-bfcb-232ee1cced2a\" (UID: \"f631c93e-2066-410d-bfcb-232ee1cced2a\") " Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.255080 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f631c93e-2066-410d-bfcb-232ee1cced2a-console-config\") pod \"f631c93e-2066-410d-bfcb-232ee1cced2a\" (UID: \"f631c93e-2066-410d-bfcb-232ee1cced2a\") " Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.255109 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f631c93e-2066-410d-bfcb-232ee1cced2a-oauth-serving-cert\") pod \"f631c93e-2066-410d-bfcb-232ee1cced2a\" (UID: \"f631c93e-2066-410d-bfcb-232ee1cced2a\") " Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.256157 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f631c93e-2066-410d-bfcb-232ee1cced2a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f631c93e-2066-410d-bfcb-232ee1cced2a" (UID: "f631c93e-2066-410d-bfcb-232ee1cced2a"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.256462 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f631c93e-2066-410d-bfcb-232ee1cced2a-service-ca" (OuterVolumeSpecName: "service-ca") pod "f631c93e-2066-410d-bfcb-232ee1cced2a" (UID: "f631c93e-2066-410d-bfcb-232ee1cced2a"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.256811 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f631c93e-2066-410d-bfcb-232ee1cced2a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f631c93e-2066-410d-bfcb-232ee1cced2a" (UID: "f631c93e-2066-410d-bfcb-232ee1cced2a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.257129 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f631c93e-2066-410d-bfcb-232ee1cced2a-console-config" (OuterVolumeSpecName: "console-config") pod "f631c93e-2066-410d-bfcb-232ee1cced2a" (UID: "f631c93e-2066-410d-bfcb-232ee1cced2a"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.263612 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f631c93e-2066-410d-bfcb-232ee1cced2a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f631c93e-2066-410d-bfcb-232ee1cced2a" (UID: "f631c93e-2066-410d-bfcb-232ee1cced2a"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.263927 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f631c93e-2066-410d-bfcb-232ee1cced2a-kube-api-access-bxjjc" (OuterVolumeSpecName: "kube-api-access-bxjjc") pod "f631c93e-2066-410d-bfcb-232ee1cced2a" (UID: "f631c93e-2066-410d-bfcb-232ee1cced2a"). InnerVolumeSpecName "kube-api-access-bxjjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.264342 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f631c93e-2066-410d-bfcb-232ee1cced2a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f631c93e-2066-410d-bfcb-232ee1cced2a" (UID: "f631c93e-2066-410d-bfcb-232ee1cced2a"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.355294 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks"] Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.356214 5024 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f631c93e-2066-410d-bfcb-232ee1cced2a-service-ca\") on node \"crc\" DevicePath \"\"" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.356305 5024 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f631c93e-2066-410d-bfcb-232ee1cced2a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:41:11 crc kubenswrapper[5024]: E1007 12:41:11.356404 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f631c93e-2066-410d-bfcb-232ee1cced2a" containerName="console" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.356426 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f631c93e-2066-410d-bfcb-232ee1cced2a" containerName="console" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.356544 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="f631c93e-2066-410d-bfcb-232ee1cced2a" containerName="console" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.356405 5024 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f631c93e-2066-410d-bfcb-232ee1cced2a-console-config\") on node \"crc\" DevicePath \"\"" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.356686 5024 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f631c93e-2066-410d-bfcb-232ee1cced2a-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.356739 5024 
reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f631c93e-2066-410d-bfcb-232ee1cced2a-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.356796 5024 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f631c93e-2066-410d-bfcb-232ee1cced2a-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.356859 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxjjc\" (UniqueName: \"kubernetes.io/projected/f631c93e-2066-410d-bfcb-232ee1cced2a-kube-api-access-bxjjc\") on node \"crc\" DevicePath \"\"" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.357398 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.359453 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.370887 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks"] Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.457454 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38286d5e-c3d8-49a7-89f9-88e4ba1ed331-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks\" (UID: \"38286d5e-c3d8-49a7-89f9-88e4ba1ed331\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.457740 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38286d5e-c3d8-49a7-89f9-88e4ba1ed331-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks\" (UID: \"38286d5e-c3d8-49a7-89f9-88e4ba1ed331\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.457869 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9ccw\" (UniqueName: \"kubernetes.io/projected/38286d5e-c3d8-49a7-89f9-88e4ba1ed331-kube-api-access-n9ccw\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks\" (UID: \"38286d5e-c3d8-49a7-89f9-88e4ba1ed331\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.559601 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9ccw\" (UniqueName: \"kubernetes.io/projected/38286d5e-c3d8-49a7-89f9-88e4ba1ed331-kube-api-access-n9ccw\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks\" (UID: \"38286d5e-c3d8-49a7-89f9-88e4ba1ed331\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.559670 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38286d5e-c3d8-49a7-89f9-88e4ba1ed331-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks\" (UID: \"38286d5e-c3d8-49a7-89f9-88e4ba1ed331\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.559726 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/38286d5e-c3d8-49a7-89f9-88e4ba1ed331-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks\" (UID: \"38286d5e-c3d8-49a7-89f9-88e4ba1ed331\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.560706 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38286d5e-c3d8-49a7-89f9-88e4ba1ed331-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks\" (UID: \"38286d5e-c3d8-49a7-89f9-88e4ba1ed331\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.560743 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38286d5e-c3d8-49a7-89f9-88e4ba1ed331-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks\" (UID: \"38286d5e-c3d8-49a7-89f9-88e4ba1ed331\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.580875 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9ccw\" (UniqueName: \"kubernetes.io/projected/38286d5e-c3d8-49a7-89f9-88e4ba1ed331-kube-api-access-n9ccw\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks\" (UID: \"38286d5e-c3d8-49a7-89f9-88e4ba1ed331\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.670696 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.801446 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9t72d_f631c93e-2066-410d-bfcb-232ee1cced2a/console/0.log" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.801518 5024 generic.go:334] "Generic (PLEG): container finished" podID="f631c93e-2066-410d-bfcb-232ee1cced2a" containerID="fbe13cf83795b5253065ad073bffb3684a4ee29d5e554160d6f36c2d2fe5a4a8" exitCode=2 Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.802802 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9t72d" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.807693 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9t72d" event={"ID":"f631c93e-2066-410d-bfcb-232ee1cced2a","Type":"ContainerDied","Data":"fbe13cf83795b5253065ad073bffb3684a4ee29d5e554160d6f36c2d2fe5a4a8"} Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.807788 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9t72d" event={"ID":"f631c93e-2066-410d-bfcb-232ee1cced2a","Type":"ContainerDied","Data":"1a6152534f14a018a6fbb101e18719e2c0c1d18db504037d44317d595d257a0f"} Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.807814 5024 scope.go:117] "RemoveContainer" containerID="fbe13cf83795b5253065ad073bffb3684a4ee29d5e554160d6f36c2d2fe5a4a8" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.844903 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9t72d"] Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.851698 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-9t72d"] Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.863043 5024 
scope.go:117] "RemoveContainer" containerID="fbe13cf83795b5253065ad073bffb3684a4ee29d5e554160d6f36c2d2fe5a4a8" Oct 07 12:41:11 crc kubenswrapper[5024]: E1007 12:41:11.864064 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbe13cf83795b5253065ad073bffb3684a4ee29d5e554160d6f36c2d2fe5a4a8\": container with ID starting with fbe13cf83795b5253065ad073bffb3684a4ee29d5e554160d6f36c2d2fe5a4a8 not found: ID does not exist" containerID="fbe13cf83795b5253065ad073bffb3684a4ee29d5e554160d6f36c2d2fe5a4a8" Oct 07 12:41:11 crc kubenswrapper[5024]: I1007 12:41:11.864105 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbe13cf83795b5253065ad073bffb3684a4ee29d5e554160d6f36c2d2fe5a4a8"} err="failed to get container status \"fbe13cf83795b5253065ad073bffb3684a4ee29d5e554160d6f36c2d2fe5a4a8\": rpc error: code = NotFound desc = could not find container \"fbe13cf83795b5253065ad073bffb3684a4ee29d5e554160d6f36c2d2fe5a4a8\": container with ID starting with fbe13cf83795b5253065ad073bffb3684a4ee29d5e554160d6f36c2d2fe5a4a8 not found: ID does not exist" Oct 07 12:41:12 crc kubenswrapper[5024]: I1007 12:41:12.120570 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks"] Oct 07 12:41:12 crc kubenswrapper[5024]: W1007 12:41:12.126949 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38286d5e_c3d8_49a7_89f9_88e4ba1ed331.slice/crio-96830ef6a50e5ae08151ac0fb1b33132a1b3fda4e3b564314caf474df9873963 WatchSource:0}: Error finding container 96830ef6a50e5ae08151ac0fb1b33132a1b3fda4e3b564314caf474df9873963: Status 404 returned error can't find the container with id 96830ef6a50e5ae08151ac0fb1b33132a1b3fda4e3b564314caf474df9873963 Oct 07 12:41:12 crc kubenswrapper[5024]: I1007 12:41:12.757358 5024 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f631c93e-2066-410d-bfcb-232ee1cced2a" path="/var/lib/kubelet/pods/f631c93e-2066-410d-bfcb-232ee1cced2a/volumes" Oct 07 12:41:12 crc kubenswrapper[5024]: I1007 12:41:12.821521 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks" event={"ID":"38286d5e-c3d8-49a7-89f9-88e4ba1ed331","Type":"ContainerStarted","Data":"96830ef6a50e5ae08151ac0fb1b33132a1b3fda4e3b564314caf474df9873963"} Oct 07 12:41:13 crc kubenswrapper[5024]: I1007 12:41:13.721130 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 12:41:13 crc kubenswrapper[5024]: I1007 12:41:13.721583 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 12:41:13 crc kubenswrapper[5024]: I1007 12:41:13.831784 5024 generic.go:334] "Generic (PLEG): container finished" podID="38286d5e-c3d8-49a7-89f9-88e4ba1ed331" containerID="6bc9b710add98b347f1a98fcf98cfc5f05adfa963c9b9678b0d930e2e629aae9" exitCode=0 Oct 07 12:41:13 crc kubenswrapper[5024]: I1007 12:41:13.831845 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks" event={"ID":"38286d5e-c3d8-49a7-89f9-88e4ba1ed331","Type":"ContainerDied","Data":"6bc9b710add98b347f1a98fcf98cfc5f05adfa963c9b9678b0d930e2e629aae9"} Oct 07 12:41:15 crc kubenswrapper[5024]: I1007 12:41:15.845748 5024 generic.go:334] "Generic 
(PLEG): container finished" podID="38286d5e-c3d8-49a7-89f9-88e4ba1ed331" containerID="24302fe4a8cc742274737da5a4dc62893ef301c90c88170154b9771800cb6afa" exitCode=0 Oct 07 12:41:15 crc kubenswrapper[5024]: I1007 12:41:15.845820 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks" event={"ID":"38286d5e-c3d8-49a7-89f9-88e4ba1ed331","Type":"ContainerDied","Data":"24302fe4a8cc742274737da5a4dc62893ef301c90c88170154b9771800cb6afa"} Oct 07 12:41:16 crc kubenswrapper[5024]: I1007 12:41:16.483396 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-58v7p" Oct 07 12:41:16 crc kubenswrapper[5024]: I1007 12:41:16.483449 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-58v7p" Oct 07 12:41:16 crc kubenswrapper[5024]: I1007 12:41:16.521860 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-58v7p" Oct 07 12:41:16 crc kubenswrapper[5024]: I1007 12:41:16.853075 5024 generic.go:334] "Generic (PLEG): container finished" podID="38286d5e-c3d8-49a7-89f9-88e4ba1ed331" containerID="23ee9f6abf817ac05706fcca499183276a210f19536ea7ee23cd60496411a622" exitCode=0 Oct 07 12:41:16 crc kubenswrapper[5024]: I1007 12:41:16.854015 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks" event={"ID":"38286d5e-c3d8-49a7-89f9-88e4ba1ed331","Type":"ContainerDied","Data":"23ee9f6abf817ac05706fcca499183276a210f19536ea7ee23cd60496411a622"} Oct 07 12:41:16 crc kubenswrapper[5024]: I1007 12:41:16.896497 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-58v7p" Oct 07 12:41:18 crc kubenswrapper[5024]: I1007 12:41:18.170423 5024 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks" Oct 07 12:41:18 crc kubenswrapper[5024]: I1007 12:41:18.248994 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38286d5e-c3d8-49a7-89f9-88e4ba1ed331-bundle\") pod \"38286d5e-c3d8-49a7-89f9-88e4ba1ed331\" (UID: \"38286d5e-c3d8-49a7-89f9-88e4ba1ed331\") " Oct 07 12:41:18 crc kubenswrapper[5024]: I1007 12:41:18.249035 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9ccw\" (UniqueName: \"kubernetes.io/projected/38286d5e-c3d8-49a7-89f9-88e4ba1ed331-kube-api-access-n9ccw\") pod \"38286d5e-c3d8-49a7-89f9-88e4ba1ed331\" (UID: \"38286d5e-c3d8-49a7-89f9-88e4ba1ed331\") " Oct 07 12:41:18 crc kubenswrapper[5024]: I1007 12:41:18.249060 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38286d5e-c3d8-49a7-89f9-88e4ba1ed331-util\") pod \"38286d5e-c3d8-49a7-89f9-88e4ba1ed331\" (UID: \"38286d5e-c3d8-49a7-89f9-88e4ba1ed331\") " Oct 07 12:41:18 crc kubenswrapper[5024]: I1007 12:41:18.251647 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38286d5e-c3d8-49a7-89f9-88e4ba1ed331-bundle" (OuterVolumeSpecName: "bundle") pod "38286d5e-c3d8-49a7-89f9-88e4ba1ed331" (UID: "38286d5e-c3d8-49a7-89f9-88e4ba1ed331"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:41:18 crc kubenswrapper[5024]: I1007 12:41:18.260555 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38286d5e-c3d8-49a7-89f9-88e4ba1ed331-kube-api-access-n9ccw" (OuterVolumeSpecName: "kube-api-access-n9ccw") pod "38286d5e-c3d8-49a7-89f9-88e4ba1ed331" (UID: "38286d5e-c3d8-49a7-89f9-88e4ba1ed331"). InnerVolumeSpecName "kube-api-access-n9ccw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:41:18 crc kubenswrapper[5024]: I1007 12:41:18.262683 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38286d5e-c3d8-49a7-89f9-88e4ba1ed331-util" (OuterVolumeSpecName: "util") pod "38286d5e-c3d8-49a7-89f9-88e4ba1ed331" (UID: "38286d5e-c3d8-49a7-89f9-88e4ba1ed331"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:41:18 crc kubenswrapper[5024]: I1007 12:41:18.351188 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9ccw\" (UniqueName: \"kubernetes.io/projected/38286d5e-c3d8-49a7-89f9-88e4ba1ed331-kube-api-access-n9ccw\") on node \"crc\" DevicePath \"\"" Oct 07 12:41:18 crc kubenswrapper[5024]: I1007 12:41:18.351273 5024 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38286d5e-c3d8-49a7-89f9-88e4ba1ed331-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:41:18 crc kubenswrapper[5024]: I1007 12:41:18.351284 5024 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38286d5e-c3d8-49a7-89f9-88e4ba1ed331-util\") on node \"crc\" DevicePath \"\"" Oct 07 12:41:18 crc kubenswrapper[5024]: I1007 12:41:18.873697 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks" event={"ID":"38286d5e-c3d8-49a7-89f9-88e4ba1ed331","Type":"ContainerDied","Data":"96830ef6a50e5ae08151ac0fb1b33132a1b3fda4e3b564314caf474df9873963"} Oct 07 12:41:18 crc kubenswrapper[5024]: I1007 12:41:18.874271 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96830ef6a50e5ae08151ac0fb1b33132a1b3fda4e3b564314caf474df9873963" Oct 07 12:41:18 crc kubenswrapper[5024]: I1007 12:41:18.873822 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks" Oct 07 12:41:19 crc kubenswrapper[5024]: I1007 12:41:19.889772 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-58v7p"] Oct 07 12:41:19 crc kubenswrapper[5024]: I1007 12:41:19.890027 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-58v7p" podUID="a0b2cf9c-9454-4240-aa8e-370cda42f4c4" containerName="registry-server" containerID="cri-o://142a9e41e2c68063e4fa94ac3ad5ab3fd612cd279486cfef6689eb186f2f3e2a" gracePeriod=2 Oct 07 12:41:20 crc kubenswrapper[5024]: I1007 12:41:20.324448 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-58v7p" Oct 07 12:41:20 crc kubenswrapper[5024]: I1007 12:41:20.387172 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0b2cf9c-9454-4240-aa8e-370cda42f4c4-catalog-content\") pod \"a0b2cf9c-9454-4240-aa8e-370cda42f4c4\" (UID: \"a0b2cf9c-9454-4240-aa8e-370cda42f4c4\") " Oct 07 12:41:20 crc kubenswrapper[5024]: I1007 12:41:20.387379 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdrjn\" (UniqueName: \"kubernetes.io/projected/a0b2cf9c-9454-4240-aa8e-370cda42f4c4-kube-api-access-hdrjn\") pod \"a0b2cf9c-9454-4240-aa8e-370cda42f4c4\" (UID: \"a0b2cf9c-9454-4240-aa8e-370cda42f4c4\") " Oct 07 12:41:20 crc kubenswrapper[5024]: I1007 12:41:20.387462 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0b2cf9c-9454-4240-aa8e-370cda42f4c4-utilities\") pod \"a0b2cf9c-9454-4240-aa8e-370cda42f4c4\" (UID: \"a0b2cf9c-9454-4240-aa8e-370cda42f4c4\") " Oct 07 12:41:20 crc kubenswrapper[5024]: I1007 12:41:20.389388 5024 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0b2cf9c-9454-4240-aa8e-370cda42f4c4-utilities" (OuterVolumeSpecName: "utilities") pod "a0b2cf9c-9454-4240-aa8e-370cda42f4c4" (UID: "a0b2cf9c-9454-4240-aa8e-370cda42f4c4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:41:20 crc kubenswrapper[5024]: I1007 12:41:20.393828 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0b2cf9c-9454-4240-aa8e-370cda42f4c4-kube-api-access-hdrjn" (OuterVolumeSpecName: "kube-api-access-hdrjn") pod "a0b2cf9c-9454-4240-aa8e-370cda42f4c4" (UID: "a0b2cf9c-9454-4240-aa8e-370cda42f4c4"). InnerVolumeSpecName "kube-api-access-hdrjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:41:20 crc kubenswrapper[5024]: I1007 12:41:20.488699 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdrjn\" (UniqueName: \"kubernetes.io/projected/a0b2cf9c-9454-4240-aa8e-370cda42f4c4-kube-api-access-hdrjn\") on node \"crc\" DevicePath \"\"" Oct 07 12:41:20 crc kubenswrapper[5024]: I1007 12:41:20.488745 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0b2cf9c-9454-4240-aa8e-370cda42f4c4-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 12:41:20 crc kubenswrapper[5024]: I1007 12:41:20.672040 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0b2cf9c-9454-4240-aa8e-370cda42f4c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0b2cf9c-9454-4240-aa8e-370cda42f4c4" (UID: "a0b2cf9c-9454-4240-aa8e-370cda42f4c4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:41:20 crc kubenswrapper[5024]: I1007 12:41:20.691354 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0b2cf9c-9454-4240-aa8e-370cda42f4c4-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 12:41:20 crc kubenswrapper[5024]: I1007 12:41:20.887597 5024 generic.go:334] "Generic (PLEG): container finished" podID="a0b2cf9c-9454-4240-aa8e-370cda42f4c4" containerID="142a9e41e2c68063e4fa94ac3ad5ab3fd612cd279486cfef6689eb186f2f3e2a" exitCode=0 Oct 07 12:41:20 crc kubenswrapper[5024]: I1007 12:41:20.887670 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58v7p" event={"ID":"a0b2cf9c-9454-4240-aa8e-370cda42f4c4","Type":"ContainerDied","Data":"142a9e41e2c68063e4fa94ac3ad5ab3fd612cd279486cfef6689eb186f2f3e2a"} Oct 07 12:41:20 crc kubenswrapper[5024]: I1007 12:41:20.887678 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-58v7p" Oct 07 12:41:20 crc kubenswrapper[5024]: I1007 12:41:20.887718 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58v7p" event={"ID":"a0b2cf9c-9454-4240-aa8e-370cda42f4c4","Type":"ContainerDied","Data":"eac434516d4170c714ee2a6d041fbe4be975390d3dcdedacd015a0b80a0e69d3"} Oct 07 12:41:20 crc kubenswrapper[5024]: I1007 12:41:20.887746 5024 scope.go:117] "RemoveContainer" containerID="142a9e41e2c68063e4fa94ac3ad5ab3fd612cd279486cfef6689eb186f2f3e2a" Oct 07 12:41:20 crc kubenswrapper[5024]: I1007 12:41:20.904519 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-58v7p"] Oct 07 12:41:20 crc kubenswrapper[5024]: I1007 12:41:20.908908 5024 scope.go:117] "RemoveContainer" containerID="07a954d743dbb071df4b1be061e0623bf1bb71e0bb7880b3a864290690081b2b" Oct 07 12:41:20 crc kubenswrapper[5024]: I1007 12:41:20.909246 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-58v7p"] Oct 07 12:41:20 crc kubenswrapper[5024]: I1007 12:41:20.930428 5024 scope.go:117] "RemoveContainer" containerID="93c383eb1637d2c348beac4f9b2081378b5adc316b0dcbdac1f9fa91a0ac4b01" Oct 07 12:41:20 crc kubenswrapper[5024]: I1007 12:41:20.946689 5024 scope.go:117] "RemoveContainer" containerID="142a9e41e2c68063e4fa94ac3ad5ab3fd612cd279486cfef6689eb186f2f3e2a" Oct 07 12:41:20 crc kubenswrapper[5024]: E1007 12:41:20.947202 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"142a9e41e2c68063e4fa94ac3ad5ab3fd612cd279486cfef6689eb186f2f3e2a\": container with ID starting with 142a9e41e2c68063e4fa94ac3ad5ab3fd612cd279486cfef6689eb186f2f3e2a not found: ID does not exist" containerID="142a9e41e2c68063e4fa94ac3ad5ab3fd612cd279486cfef6689eb186f2f3e2a" Oct 07 12:41:20 crc kubenswrapper[5024]: I1007 12:41:20.947247 5024 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"142a9e41e2c68063e4fa94ac3ad5ab3fd612cd279486cfef6689eb186f2f3e2a"} err="failed to get container status \"142a9e41e2c68063e4fa94ac3ad5ab3fd612cd279486cfef6689eb186f2f3e2a\": rpc error: code = NotFound desc = could not find container \"142a9e41e2c68063e4fa94ac3ad5ab3fd612cd279486cfef6689eb186f2f3e2a\": container with ID starting with 142a9e41e2c68063e4fa94ac3ad5ab3fd612cd279486cfef6689eb186f2f3e2a not found: ID does not exist" Oct 07 12:41:20 crc kubenswrapper[5024]: I1007 12:41:20.947276 5024 scope.go:117] "RemoveContainer" containerID="07a954d743dbb071df4b1be061e0623bf1bb71e0bb7880b3a864290690081b2b" Oct 07 12:41:20 crc kubenswrapper[5024]: E1007 12:41:20.947736 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07a954d743dbb071df4b1be061e0623bf1bb71e0bb7880b3a864290690081b2b\": container with ID starting with 07a954d743dbb071df4b1be061e0623bf1bb71e0bb7880b3a864290690081b2b not found: ID does not exist" containerID="07a954d743dbb071df4b1be061e0623bf1bb71e0bb7880b3a864290690081b2b" Oct 07 12:41:20 crc kubenswrapper[5024]: I1007 12:41:20.947803 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07a954d743dbb071df4b1be061e0623bf1bb71e0bb7880b3a864290690081b2b"} err="failed to get container status \"07a954d743dbb071df4b1be061e0623bf1bb71e0bb7880b3a864290690081b2b\": rpc error: code = NotFound desc = could not find container \"07a954d743dbb071df4b1be061e0623bf1bb71e0bb7880b3a864290690081b2b\": container with ID starting with 07a954d743dbb071df4b1be061e0623bf1bb71e0bb7880b3a864290690081b2b not found: ID does not exist" Oct 07 12:41:20 crc kubenswrapper[5024]: I1007 12:41:20.947843 5024 scope.go:117] "RemoveContainer" containerID="93c383eb1637d2c348beac4f9b2081378b5adc316b0dcbdac1f9fa91a0ac4b01" Oct 07 12:41:20 crc kubenswrapper[5024]: E1007 
12:41:20.948463 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93c383eb1637d2c348beac4f9b2081378b5adc316b0dcbdac1f9fa91a0ac4b01\": container with ID starting with 93c383eb1637d2c348beac4f9b2081378b5adc316b0dcbdac1f9fa91a0ac4b01 not found: ID does not exist" containerID="93c383eb1637d2c348beac4f9b2081378b5adc316b0dcbdac1f9fa91a0ac4b01" Oct 07 12:41:20 crc kubenswrapper[5024]: I1007 12:41:20.948507 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93c383eb1637d2c348beac4f9b2081378b5adc316b0dcbdac1f9fa91a0ac4b01"} err="failed to get container status \"93c383eb1637d2c348beac4f9b2081378b5adc316b0dcbdac1f9fa91a0ac4b01\": rpc error: code = NotFound desc = could not find container \"93c383eb1637d2c348beac4f9b2081378b5adc316b0dcbdac1f9fa91a0ac4b01\": container with ID starting with 93c383eb1637d2c348beac4f9b2081378b5adc316b0dcbdac1f9fa91a0ac4b01 not found: ID does not exist" Oct 07 12:41:21 crc kubenswrapper[5024]: I1007 12:41:21.098472 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hc46j" Oct 07 12:41:22 crc kubenswrapper[5024]: I1007 12:41:22.759302 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0b2cf9c-9454-4240-aa8e-370cda42f4c4" path="/var/lib/kubelet/pods/a0b2cf9c-9454-4240-aa8e-370cda42f4c4/volumes" Oct 07 12:41:24 crc kubenswrapper[5024]: I1007 12:41:24.288724 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hc46j"] Oct 07 12:41:24 crc kubenswrapper[5024]: I1007 12:41:24.288955 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hc46j" podUID="f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c" containerName="registry-server" containerID="cri-o://7e7361259a88db6af4d72728e1f46005414444572ab2d41eea060f1a9495d4b9" gracePeriod=2 Oct 
07 12:41:24 crc kubenswrapper[5024]: I1007 12:41:24.683706 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hc46j" Oct 07 12:41:24 crc kubenswrapper[5024]: I1007 12:41:24.736839 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c-catalog-content\") pod \"f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c\" (UID: \"f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c\") " Oct 07 12:41:24 crc kubenswrapper[5024]: I1007 12:41:24.736918 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6pn9\" (UniqueName: \"kubernetes.io/projected/f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c-kube-api-access-p6pn9\") pod \"f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c\" (UID: \"f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c\") " Oct 07 12:41:24 crc kubenswrapper[5024]: I1007 12:41:24.736945 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c-utilities\") pod \"f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c\" (UID: \"f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c\") " Oct 07 12:41:24 crc kubenswrapper[5024]: I1007 12:41:24.737952 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c-utilities" (OuterVolumeSpecName: "utilities") pod "f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c" (UID: "f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:41:24 crc kubenswrapper[5024]: I1007 12:41:24.742978 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c-kube-api-access-p6pn9" (OuterVolumeSpecName: "kube-api-access-p6pn9") pod "f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c" (UID: "f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c"). InnerVolumeSpecName "kube-api-access-p6pn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:41:24 crc kubenswrapper[5024]: I1007 12:41:24.795285 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c" (UID: "f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:41:24 crc kubenswrapper[5024]: I1007 12:41:24.838810 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 12:41:24 crc kubenswrapper[5024]: I1007 12:41:24.838851 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6pn9\" (UniqueName: \"kubernetes.io/projected/f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c-kube-api-access-p6pn9\") on node \"crc\" DevicePath \"\"" Oct 07 12:41:24 crc kubenswrapper[5024]: I1007 12:41:24.838868 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 12:41:24 crc kubenswrapper[5024]: I1007 12:41:24.922032 5024 generic.go:334] "Generic (PLEG): container finished" podID="f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c" 
containerID="7e7361259a88db6af4d72728e1f46005414444572ab2d41eea060f1a9495d4b9" exitCode=0 Oct 07 12:41:24 crc kubenswrapper[5024]: I1007 12:41:24.922073 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hc46j" event={"ID":"f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c","Type":"ContainerDied","Data":"7e7361259a88db6af4d72728e1f46005414444572ab2d41eea060f1a9495d4b9"} Oct 07 12:41:24 crc kubenswrapper[5024]: I1007 12:41:24.922098 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hc46j" event={"ID":"f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c","Type":"ContainerDied","Data":"840d56b7b62dd4924b46488c74adb0a263e720546da013d71059bccb17bb3edd"} Oct 07 12:41:24 crc kubenswrapper[5024]: I1007 12:41:24.922115 5024 scope.go:117] "RemoveContainer" containerID="7e7361259a88db6af4d72728e1f46005414444572ab2d41eea060f1a9495d4b9" Oct 07 12:41:24 crc kubenswrapper[5024]: I1007 12:41:24.922245 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hc46j" Oct 07 12:41:24 crc kubenswrapper[5024]: I1007 12:41:24.952450 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hc46j"] Oct 07 12:41:24 crc kubenswrapper[5024]: I1007 12:41:24.958676 5024 scope.go:117] "RemoveContainer" containerID="244623ea13d6b7a488f8cff72af9cb0385669f4de5a8c99763dd8ae8974dbc89" Oct 07 12:41:24 crc kubenswrapper[5024]: I1007 12:41:24.961692 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hc46j"] Oct 07 12:41:24 crc kubenswrapper[5024]: I1007 12:41:24.975326 5024 scope.go:117] "RemoveContainer" containerID="fe8d163c9ba2baa6d0e87e160784209dac3d259fa3d19b7a9f3877972de0c2fd" Oct 07 12:41:24 crc kubenswrapper[5024]: I1007 12:41:24.999421 5024 scope.go:117] "RemoveContainer" containerID="7e7361259a88db6af4d72728e1f46005414444572ab2d41eea060f1a9495d4b9" Oct 07 12:41:24 crc kubenswrapper[5024]: E1007 12:41:24.999805 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e7361259a88db6af4d72728e1f46005414444572ab2d41eea060f1a9495d4b9\": container with ID starting with 7e7361259a88db6af4d72728e1f46005414444572ab2d41eea060f1a9495d4b9 not found: ID does not exist" containerID="7e7361259a88db6af4d72728e1f46005414444572ab2d41eea060f1a9495d4b9" Oct 07 12:41:24 crc kubenswrapper[5024]: I1007 12:41:24.999843 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e7361259a88db6af4d72728e1f46005414444572ab2d41eea060f1a9495d4b9"} err="failed to get container status \"7e7361259a88db6af4d72728e1f46005414444572ab2d41eea060f1a9495d4b9\": rpc error: code = NotFound desc = could not find container \"7e7361259a88db6af4d72728e1f46005414444572ab2d41eea060f1a9495d4b9\": container with ID starting with 7e7361259a88db6af4d72728e1f46005414444572ab2d41eea060f1a9495d4b9 not 
found: ID does not exist" Oct 07 12:41:24 crc kubenswrapper[5024]: I1007 12:41:24.999868 5024 scope.go:117] "RemoveContainer" containerID="244623ea13d6b7a488f8cff72af9cb0385669f4de5a8c99763dd8ae8974dbc89" Oct 07 12:41:25 crc kubenswrapper[5024]: E1007 12:41:25.000307 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"244623ea13d6b7a488f8cff72af9cb0385669f4de5a8c99763dd8ae8974dbc89\": container with ID starting with 244623ea13d6b7a488f8cff72af9cb0385669f4de5a8c99763dd8ae8974dbc89 not found: ID does not exist" containerID="244623ea13d6b7a488f8cff72af9cb0385669f4de5a8c99763dd8ae8974dbc89" Oct 07 12:41:25 crc kubenswrapper[5024]: I1007 12:41:25.000339 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"244623ea13d6b7a488f8cff72af9cb0385669f4de5a8c99763dd8ae8974dbc89"} err="failed to get container status \"244623ea13d6b7a488f8cff72af9cb0385669f4de5a8c99763dd8ae8974dbc89\": rpc error: code = NotFound desc = could not find container \"244623ea13d6b7a488f8cff72af9cb0385669f4de5a8c99763dd8ae8974dbc89\": container with ID starting with 244623ea13d6b7a488f8cff72af9cb0385669f4de5a8c99763dd8ae8974dbc89 not found: ID does not exist" Oct 07 12:41:25 crc kubenswrapper[5024]: I1007 12:41:25.000358 5024 scope.go:117] "RemoveContainer" containerID="fe8d163c9ba2baa6d0e87e160784209dac3d259fa3d19b7a9f3877972de0c2fd" Oct 07 12:41:25 crc kubenswrapper[5024]: E1007 12:41:25.000645 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe8d163c9ba2baa6d0e87e160784209dac3d259fa3d19b7a9f3877972de0c2fd\": container with ID starting with fe8d163c9ba2baa6d0e87e160784209dac3d259fa3d19b7a9f3877972de0c2fd not found: ID does not exist" containerID="fe8d163c9ba2baa6d0e87e160784209dac3d259fa3d19b7a9f3877972de0c2fd" Oct 07 12:41:25 crc kubenswrapper[5024]: I1007 12:41:25.000683 5024 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe8d163c9ba2baa6d0e87e160784209dac3d259fa3d19b7a9f3877972de0c2fd"} err="failed to get container status \"fe8d163c9ba2baa6d0e87e160784209dac3d259fa3d19b7a9f3877972de0c2fd\": rpc error: code = NotFound desc = could not find container \"fe8d163c9ba2baa6d0e87e160784209dac3d259fa3d19b7a9f3877972de0c2fd\": container with ID starting with fe8d163c9ba2baa6d0e87e160784209dac3d259fa3d19b7a9f3877972de0c2fd not found: ID does not exist" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.514661 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-664cbcfb76-9l549"] Oct 07 12:41:26 crc kubenswrapper[5024]: E1007 12:41:26.514937 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c" containerName="extract-content" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.514951 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c" containerName="extract-content" Oct 07 12:41:26 crc kubenswrapper[5024]: E1007 12:41:26.514968 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c" containerName="registry-server" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.514976 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c" containerName="registry-server" Oct 07 12:41:26 crc kubenswrapper[5024]: E1007 12:41:26.514988 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c" containerName="extract-utilities" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.514997 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c" containerName="extract-utilities" Oct 07 12:41:26 crc kubenswrapper[5024]: E1007 12:41:26.515009 5024 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="38286d5e-c3d8-49a7-89f9-88e4ba1ed331" containerName="extract" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.515017 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="38286d5e-c3d8-49a7-89f9-88e4ba1ed331" containerName="extract" Oct 07 12:41:26 crc kubenswrapper[5024]: E1007 12:41:26.515026 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38286d5e-c3d8-49a7-89f9-88e4ba1ed331" containerName="util" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.515034 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="38286d5e-c3d8-49a7-89f9-88e4ba1ed331" containerName="util" Oct 07 12:41:26 crc kubenswrapper[5024]: E1007 12:41:26.515045 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0b2cf9c-9454-4240-aa8e-370cda42f4c4" containerName="registry-server" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.515053 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0b2cf9c-9454-4240-aa8e-370cda42f4c4" containerName="registry-server" Oct 07 12:41:26 crc kubenswrapper[5024]: E1007 12:41:26.515062 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0b2cf9c-9454-4240-aa8e-370cda42f4c4" containerName="extract-utilities" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.515072 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0b2cf9c-9454-4240-aa8e-370cda42f4c4" containerName="extract-utilities" Oct 07 12:41:26 crc kubenswrapper[5024]: E1007 12:41:26.515084 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0b2cf9c-9454-4240-aa8e-370cda42f4c4" containerName="extract-content" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.515092 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0b2cf9c-9454-4240-aa8e-370cda42f4c4" containerName="extract-content" Oct 07 12:41:26 crc kubenswrapper[5024]: E1007 12:41:26.515103 5024 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="38286d5e-c3d8-49a7-89f9-88e4ba1ed331" containerName="pull" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.515110 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="38286d5e-c3d8-49a7-89f9-88e4ba1ed331" containerName="pull" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.515238 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0b2cf9c-9454-4240-aa8e-370cda42f4c4" containerName="registry-server" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.515255 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c" containerName="registry-server" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.515266 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="38286d5e-c3d8-49a7-89f9-88e4ba1ed331" containerName="extract" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.515707 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-664cbcfb76-9l549" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.518061 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.518779 5024 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.518779 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.520160 5024 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-fw7gx" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.521453 5024 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Oct 07 12:41:26 crc 
kubenswrapper[5024]: I1007 12:41:26.530213 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-664cbcfb76-9l549"] Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.560488 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crvtv\" (UniqueName: \"kubernetes.io/projected/675166a8-2c18-4526-bb5f-84ef53f3fcd8-kube-api-access-crvtv\") pod \"metallb-operator-controller-manager-664cbcfb76-9l549\" (UID: \"675166a8-2c18-4526-bb5f-84ef53f3fcd8\") " pod="metallb-system/metallb-operator-controller-manager-664cbcfb76-9l549" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.560561 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/675166a8-2c18-4526-bb5f-84ef53f3fcd8-apiservice-cert\") pod \"metallb-operator-controller-manager-664cbcfb76-9l549\" (UID: \"675166a8-2c18-4526-bb5f-84ef53f3fcd8\") " pod="metallb-system/metallb-operator-controller-manager-664cbcfb76-9l549" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.560664 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/675166a8-2c18-4526-bb5f-84ef53f3fcd8-webhook-cert\") pod \"metallb-operator-controller-manager-664cbcfb76-9l549\" (UID: \"675166a8-2c18-4526-bb5f-84ef53f3fcd8\") " pod="metallb-system/metallb-operator-controller-manager-664cbcfb76-9l549" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.661851 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crvtv\" (UniqueName: \"kubernetes.io/projected/675166a8-2c18-4526-bb5f-84ef53f3fcd8-kube-api-access-crvtv\") pod \"metallb-operator-controller-manager-664cbcfb76-9l549\" (UID: \"675166a8-2c18-4526-bb5f-84ef53f3fcd8\") " 
pod="metallb-system/metallb-operator-controller-manager-664cbcfb76-9l549" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.661915 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/675166a8-2c18-4526-bb5f-84ef53f3fcd8-apiservice-cert\") pod \"metallb-operator-controller-manager-664cbcfb76-9l549\" (UID: \"675166a8-2c18-4526-bb5f-84ef53f3fcd8\") " pod="metallb-system/metallb-operator-controller-manager-664cbcfb76-9l549" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.662004 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/675166a8-2c18-4526-bb5f-84ef53f3fcd8-webhook-cert\") pod \"metallb-operator-controller-manager-664cbcfb76-9l549\" (UID: \"675166a8-2c18-4526-bb5f-84ef53f3fcd8\") " pod="metallb-system/metallb-operator-controller-manager-664cbcfb76-9l549" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.668150 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/675166a8-2c18-4526-bb5f-84ef53f3fcd8-webhook-cert\") pod \"metallb-operator-controller-manager-664cbcfb76-9l549\" (UID: \"675166a8-2c18-4526-bb5f-84ef53f3fcd8\") " pod="metallb-system/metallb-operator-controller-manager-664cbcfb76-9l549" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.677047 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/675166a8-2c18-4526-bb5f-84ef53f3fcd8-apiservice-cert\") pod \"metallb-operator-controller-manager-664cbcfb76-9l549\" (UID: \"675166a8-2c18-4526-bb5f-84ef53f3fcd8\") " pod="metallb-system/metallb-operator-controller-manager-664cbcfb76-9l549" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.684901 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crvtv\" (UniqueName: 
\"kubernetes.io/projected/675166a8-2c18-4526-bb5f-84ef53f3fcd8-kube-api-access-crvtv\") pod \"metallb-operator-controller-manager-664cbcfb76-9l549\" (UID: \"675166a8-2c18-4526-bb5f-84ef53f3fcd8\") " pod="metallb-system/metallb-operator-controller-manager-664cbcfb76-9l549" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.760381 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c" path="/var/lib/kubelet/pods/f7f38f0e-b8de-4bc5-a5e4-78edde8fee7c/volumes" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.761462 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-67c46766d-n845m"] Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.762795 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-67c46766d-n845m" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.764781 5024 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-dp8gs" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.764781 5024 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.765721 5024 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.835865 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-664cbcfb76-9l549" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.852961 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-67c46766d-n845m"] Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.865727 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n9tz\" (UniqueName: \"kubernetes.io/projected/135eccc1-c6b9-42f2-83f7-dd54c18c2ffc-kube-api-access-5n9tz\") pod \"metallb-operator-webhook-server-67c46766d-n845m\" (UID: \"135eccc1-c6b9-42f2-83f7-dd54c18c2ffc\") " pod="metallb-system/metallb-operator-webhook-server-67c46766d-n845m" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.866040 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/135eccc1-c6b9-42f2-83f7-dd54c18c2ffc-webhook-cert\") pod \"metallb-operator-webhook-server-67c46766d-n845m\" (UID: \"135eccc1-c6b9-42f2-83f7-dd54c18c2ffc\") " pod="metallb-system/metallb-operator-webhook-server-67c46766d-n845m" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.866167 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/135eccc1-c6b9-42f2-83f7-dd54c18c2ffc-apiservice-cert\") pod \"metallb-operator-webhook-server-67c46766d-n845m\" (UID: \"135eccc1-c6b9-42f2-83f7-dd54c18c2ffc\") " pod="metallb-system/metallb-operator-webhook-server-67c46766d-n845m" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.968361 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n9tz\" (UniqueName: \"kubernetes.io/projected/135eccc1-c6b9-42f2-83f7-dd54c18c2ffc-kube-api-access-5n9tz\") pod \"metallb-operator-webhook-server-67c46766d-n845m\" (UID: 
\"135eccc1-c6b9-42f2-83f7-dd54c18c2ffc\") " pod="metallb-system/metallb-operator-webhook-server-67c46766d-n845m" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.968463 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/135eccc1-c6b9-42f2-83f7-dd54c18c2ffc-webhook-cert\") pod \"metallb-operator-webhook-server-67c46766d-n845m\" (UID: \"135eccc1-c6b9-42f2-83f7-dd54c18c2ffc\") " pod="metallb-system/metallb-operator-webhook-server-67c46766d-n845m" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.968516 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/135eccc1-c6b9-42f2-83f7-dd54c18c2ffc-apiservice-cert\") pod \"metallb-operator-webhook-server-67c46766d-n845m\" (UID: \"135eccc1-c6b9-42f2-83f7-dd54c18c2ffc\") " pod="metallb-system/metallb-operator-webhook-server-67c46766d-n845m" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.973159 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/135eccc1-c6b9-42f2-83f7-dd54c18c2ffc-apiservice-cert\") pod \"metallb-operator-webhook-server-67c46766d-n845m\" (UID: \"135eccc1-c6b9-42f2-83f7-dd54c18c2ffc\") " pod="metallb-system/metallb-operator-webhook-server-67c46766d-n845m" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.975271 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/135eccc1-c6b9-42f2-83f7-dd54c18c2ffc-webhook-cert\") pod \"metallb-operator-webhook-server-67c46766d-n845m\" (UID: \"135eccc1-c6b9-42f2-83f7-dd54c18c2ffc\") " pod="metallb-system/metallb-operator-webhook-server-67c46766d-n845m" Oct 07 12:41:26 crc kubenswrapper[5024]: I1007 12:41:26.990643 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n9tz\" (UniqueName: 
\"kubernetes.io/projected/135eccc1-c6b9-42f2-83f7-dd54c18c2ffc-kube-api-access-5n9tz\") pod \"metallb-operator-webhook-server-67c46766d-n845m\" (UID: \"135eccc1-c6b9-42f2-83f7-dd54c18c2ffc\") " pod="metallb-system/metallb-operator-webhook-server-67c46766d-n845m" Oct 07 12:41:27 crc kubenswrapper[5024]: I1007 12:41:27.078466 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-67c46766d-n845m" Oct 07 12:41:27 crc kubenswrapper[5024]: I1007 12:41:27.306916 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-67c46766d-n845m"] Oct 07 12:41:27 crc kubenswrapper[5024]: I1007 12:41:27.319848 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-664cbcfb76-9l549"] Oct 07 12:41:27 crc kubenswrapper[5024]: I1007 12:41:27.942441 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-67c46766d-n845m" event={"ID":"135eccc1-c6b9-42f2-83f7-dd54c18c2ffc","Type":"ContainerStarted","Data":"eeb691d936a7b485d0ce9ee59ec1ef53a43ca0deaa2f9e1e3704357337eb64af"} Oct 07 12:41:27 crc kubenswrapper[5024]: I1007 12:41:27.943265 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-664cbcfb76-9l549" event={"ID":"675166a8-2c18-4526-bb5f-84ef53f3fcd8","Type":"ContainerStarted","Data":"06c228e9fdf3de3eb2ef5fdd10646d40fa276253b7e69bbf36cd6c1ca9d081b6"} Oct 07 12:41:32 crc kubenswrapper[5024]: I1007 12:41:32.980596 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-67c46766d-n845m" event={"ID":"135eccc1-c6b9-42f2-83f7-dd54c18c2ffc","Type":"ContainerStarted","Data":"9d92b66904302ecde0aa817c7f06837087d4d89e36bcf704d7e37655e98966f8"} Oct 07 12:41:32 crc kubenswrapper[5024]: I1007 12:41:32.981005 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="metallb-system/metallb-operator-webhook-server-67c46766d-n845m" Oct 07 12:41:32 crc kubenswrapper[5024]: I1007 12:41:32.983180 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-664cbcfb76-9l549" event={"ID":"675166a8-2c18-4526-bb5f-84ef53f3fcd8","Type":"ContainerStarted","Data":"65ca25b1f7b63035682f23d1ee6d859e80066c31fead76e29f31e1b2dd772c06"} Oct 07 12:41:32 crc kubenswrapper[5024]: I1007 12:41:32.983366 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-664cbcfb76-9l549" Oct 07 12:41:33 crc kubenswrapper[5024]: I1007 12:41:33.000908 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-67c46766d-n845m" podStartSLOduration=1.522327202 podStartE2EDuration="7.000889107s" podCreationTimestamp="2025-10-07 12:41:26 +0000 UTC" firstStartedPulling="2025-10-07 12:41:27.333407413 +0000 UTC m=+825.409194251" lastFinishedPulling="2025-10-07 12:41:32.811969318 +0000 UTC m=+830.887756156" observedRunningTime="2025-10-07 12:41:32.99758342 +0000 UTC m=+831.073370258" watchObservedRunningTime="2025-10-07 12:41:33.000889107 +0000 UTC m=+831.076675945" Oct 07 12:41:33 crc kubenswrapper[5024]: I1007 12:41:33.023709 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-664cbcfb76-9l549" podStartSLOduration=1.567507987 podStartE2EDuration="7.023673145s" podCreationTimestamp="2025-10-07 12:41:26 +0000 UTC" firstStartedPulling="2025-10-07 12:41:27.336892236 +0000 UTC m=+825.412679074" lastFinishedPulling="2025-10-07 12:41:32.793057394 +0000 UTC m=+830.868844232" observedRunningTime="2025-10-07 12:41:33.018832923 +0000 UTC m=+831.094619771" watchObservedRunningTime="2025-10-07 12:41:33.023673145 +0000 UTC m=+831.099459983" Oct 07 12:41:34 crc kubenswrapper[5024]: I1007 12:41:34.484548 5024 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qp7vd"] Oct 07 12:41:34 crc kubenswrapper[5024]: I1007 12:41:34.486344 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qp7vd" Oct 07 12:41:34 crc kubenswrapper[5024]: I1007 12:41:34.500552 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qp7vd"] Oct 07 12:41:34 crc kubenswrapper[5024]: I1007 12:41:34.585064 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw5ln\" (UniqueName: \"kubernetes.io/projected/180ae149-69b8-4851-a836-853c401899b5-kube-api-access-kw5ln\") pod \"redhat-marketplace-qp7vd\" (UID: \"180ae149-69b8-4851-a836-853c401899b5\") " pod="openshift-marketplace/redhat-marketplace-qp7vd" Oct 07 12:41:34 crc kubenswrapper[5024]: I1007 12:41:34.585154 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/180ae149-69b8-4851-a836-853c401899b5-utilities\") pod \"redhat-marketplace-qp7vd\" (UID: \"180ae149-69b8-4851-a836-853c401899b5\") " pod="openshift-marketplace/redhat-marketplace-qp7vd" Oct 07 12:41:34 crc kubenswrapper[5024]: I1007 12:41:34.585248 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/180ae149-69b8-4851-a836-853c401899b5-catalog-content\") pod \"redhat-marketplace-qp7vd\" (UID: \"180ae149-69b8-4851-a836-853c401899b5\") " pod="openshift-marketplace/redhat-marketplace-qp7vd" Oct 07 12:41:34 crc kubenswrapper[5024]: I1007 12:41:34.686903 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw5ln\" (UniqueName: \"kubernetes.io/projected/180ae149-69b8-4851-a836-853c401899b5-kube-api-access-kw5ln\") pod 
\"redhat-marketplace-qp7vd\" (UID: \"180ae149-69b8-4851-a836-853c401899b5\") " pod="openshift-marketplace/redhat-marketplace-qp7vd" Oct 07 12:41:34 crc kubenswrapper[5024]: I1007 12:41:34.686980 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/180ae149-69b8-4851-a836-853c401899b5-utilities\") pod \"redhat-marketplace-qp7vd\" (UID: \"180ae149-69b8-4851-a836-853c401899b5\") " pod="openshift-marketplace/redhat-marketplace-qp7vd" Oct 07 12:41:34 crc kubenswrapper[5024]: I1007 12:41:34.687030 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/180ae149-69b8-4851-a836-853c401899b5-catalog-content\") pod \"redhat-marketplace-qp7vd\" (UID: \"180ae149-69b8-4851-a836-853c401899b5\") " pod="openshift-marketplace/redhat-marketplace-qp7vd" Oct 07 12:41:34 crc kubenswrapper[5024]: I1007 12:41:34.687474 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/180ae149-69b8-4851-a836-853c401899b5-catalog-content\") pod \"redhat-marketplace-qp7vd\" (UID: \"180ae149-69b8-4851-a836-853c401899b5\") " pod="openshift-marketplace/redhat-marketplace-qp7vd" Oct 07 12:41:34 crc kubenswrapper[5024]: I1007 12:41:34.687575 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/180ae149-69b8-4851-a836-853c401899b5-utilities\") pod \"redhat-marketplace-qp7vd\" (UID: \"180ae149-69b8-4851-a836-853c401899b5\") " pod="openshift-marketplace/redhat-marketplace-qp7vd" Oct 07 12:41:34 crc kubenswrapper[5024]: I1007 12:41:34.706802 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw5ln\" (UniqueName: \"kubernetes.io/projected/180ae149-69b8-4851-a836-853c401899b5-kube-api-access-kw5ln\") pod \"redhat-marketplace-qp7vd\" (UID: 
\"180ae149-69b8-4851-a836-853c401899b5\") " pod="openshift-marketplace/redhat-marketplace-qp7vd" Oct 07 12:41:34 crc kubenswrapper[5024]: I1007 12:41:34.805906 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qp7vd" Oct 07 12:41:35 crc kubenswrapper[5024]: I1007 12:41:35.241301 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qp7vd"] Oct 07 12:41:36 crc kubenswrapper[5024]: I1007 12:41:36.001441 5024 generic.go:334] "Generic (PLEG): container finished" podID="180ae149-69b8-4851-a836-853c401899b5" containerID="ce4251d784ba35f40539e803a59d19bb78bf364f9b2973f71476872c23d68de5" exitCode=0 Oct 07 12:41:36 crc kubenswrapper[5024]: I1007 12:41:36.001484 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qp7vd" event={"ID":"180ae149-69b8-4851-a836-853c401899b5","Type":"ContainerDied","Data":"ce4251d784ba35f40539e803a59d19bb78bf364f9b2973f71476872c23d68de5"} Oct 07 12:41:36 crc kubenswrapper[5024]: I1007 12:41:36.001766 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qp7vd" event={"ID":"180ae149-69b8-4851-a836-853c401899b5","Type":"ContainerStarted","Data":"146e4d8fdb872a4fc4db00c559f798a5bcfd17d09ae4bfbc11bf992970c1255c"} Oct 07 12:41:37 crc kubenswrapper[5024]: I1007 12:41:37.007966 5024 generic.go:334] "Generic (PLEG): container finished" podID="180ae149-69b8-4851-a836-853c401899b5" containerID="dd396b2d3bdc7c56fb39c9fc4f291afcf544e22b3b0d8670fbc06dbf8c16f059" exitCode=0 Oct 07 12:41:37 crc kubenswrapper[5024]: I1007 12:41:37.008273 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qp7vd" event={"ID":"180ae149-69b8-4851-a836-853c401899b5","Type":"ContainerDied","Data":"dd396b2d3bdc7c56fb39c9fc4f291afcf544e22b3b0d8670fbc06dbf8c16f059"} Oct 07 12:41:38 crc kubenswrapper[5024]: I1007 12:41:38.027259 
5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qp7vd" event={"ID":"180ae149-69b8-4851-a836-853c401899b5","Type":"ContainerStarted","Data":"5726d4d5eb43852526edade0ab7afee1925412d55f049c8798caec7af7074410"} Oct 07 12:41:38 crc kubenswrapper[5024]: I1007 12:41:38.053841 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qp7vd" podStartSLOduration=2.455453096 podStartE2EDuration="4.053824973s" podCreationTimestamp="2025-10-07 12:41:34 +0000 UTC" firstStartedPulling="2025-10-07 12:41:36.00289752 +0000 UTC m=+834.078684358" lastFinishedPulling="2025-10-07 12:41:37.601269397 +0000 UTC m=+835.677056235" observedRunningTime="2025-10-07 12:41:38.051018661 +0000 UTC m=+836.126805509" watchObservedRunningTime="2025-10-07 12:41:38.053824973 +0000 UTC m=+836.129611811" Oct 07 12:41:43 crc kubenswrapper[5024]: I1007 12:41:43.720685 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 12:41:43 crc kubenswrapper[5024]: I1007 12:41:43.721326 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 12:41:43 crc kubenswrapper[5024]: I1007 12:41:43.721382 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 12:41:43 crc kubenswrapper[5024]: I1007 12:41:43.722056 5024 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ae467e69e08193e325a69f8bb005bb8c341ea340f7486140e99337d87e5c99d6"} pod="openshift-machine-config-operator/machine-config-daemon-t95cr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 07 12:41:43 crc kubenswrapper[5024]: I1007 12:41:43.722125 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" containerID="cri-o://ae467e69e08193e325a69f8bb005bb8c341ea340f7486140e99337d87e5c99d6" gracePeriod=600 Oct 07 12:41:44 crc kubenswrapper[5024]: I1007 12:41:44.066460 5024 generic.go:334] "Generic (PLEG): container finished" podID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerID="ae467e69e08193e325a69f8bb005bb8c341ea340f7486140e99337d87e5c99d6" exitCode=0 Oct 07 12:41:44 crc kubenswrapper[5024]: I1007 12:41:44.066573 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerDied","Data":"ae467e69e08193e325a69f8bb005bb8c341ea340f7486140e99337d87e5c99d6"} Oct 07 12:41:44 crc kubenswrapper[5024]: I1007 12:41:44.067012 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerStarted","Data":"b0e095ff552b6ff8a1e3e80992a870a1ae734d6958dd93421411f9fc1d15e1a0"} Oct 07 12:41:44 crc kubenswrapper[5024]: I1007 12:41:44.067082 5024 scope.go:117] "RemoveContainer" containerID="30aaacaa3f604c9e5d817c770e527cca534161741df28a433a5a25d34542b60e" Oct 07 12:41:44 crc kubenswrapper[5024]: I1007 12:41:44.806776 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qp7vd" 
Oct 07 12:41:44 crc kubenswrapper[5024]: I1007 12:41:44.807179 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qp7vd" Oct 07 12:41:44 crc kubenswrapper[5024]: I1007 12:41:44.851202 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qp7vd" Oct 07 12:41:45 crc kubenswrapper[5024]: I1007 12:41:45.134721 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qp7vd" Oct 07 12:41:47 crc kubenswrapper[5024]: I1007 12:41:47.083405 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-67c46766d-n845m" Oct 07 12:41:47 crc kubenswrapper[5024]: I1007 12:41:47.244630 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qp7vd"] Oct 07 12:41:47 crc kubenswrapper[5024]: I1007 12:41:47.245095 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qp7vd" podUID="180ae149-69b8-4851-a836-853c401899b5" containerName="registry-server" containerID="cri-o://5726d4d5eb43852526edade0ab7afee1925412d55f049c8798caec7af7074410" gracePeriod=2 Oct 07 12:41:48 crc kubenswrapper[5024]: I1007 12:41:48.135029 5024 generic.go:334] "Generic (PLEG): container finished" podID="180ae149-69b8-4851-a836-853c401899b5" containerID="5726d4d5eb43852526edade0ab7afee1925412d55f049c8798caec7af7074410" exitCode=0 Oct 07 12:41:48 crc kubenswrapper[5024]: I1007 12:41:48.135086 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qp7vd" event={"ID":"180ae149-69b8-4851-a836-853c401899b5","Type":"ContainerDied","Data":"5726d4d5eb43852526edade0ab7afee1925412d55f049c8798caec7af7074410"} Oct 07 12:41:48 crc kubenswrapper[5024]: I1007 12:41:48.242443 5024 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qp7vd" Oct 07 12:41:48 crc kubenswrapper[5024]: I1007 12:41:48.412384 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/180ae149-69b8-4851-a836-853c401899b5-utilities\") pod \"180ae149-69b8-4851-a836-853c401899b5\" (UID: \"180ae149-69b8-4851-a836-853c401899b5\") " Oct 07 12:41:48 crc kubenswrapper[5024]: I1007 12:41:48.413095 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/180ae149-69b8-4851-a836-853c401899b5-catalog-content\") pod \"180ae149-69b8-4851-a836-853c401899b5\" (UID: \"180ae149-69b8-4851-a836-853c401899b5\") " Oct 07 12:41:48 crc kubenswrapper[5024]: I1007 12:41:48.414247 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw5ln\" (UniqueName: \"kubernetes.io/projected/180ae149-69b8-4851-a836-853c401899b5-kube-api-access-kw5ln\") pod \"180ae149-69b8-4851-a836-853c401899b5\" (UID: \"180ae149-69b8-4851-a836-853c401899b5\") " Oct 07 12:41:48 crc kubenswrapper[5024]: I1007 12:41:48.413038 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/180ae149-69b8-4851-a836-853c401899b5-utilities" (OuterVolumeSpecName: "utilities") pod "180ae149-69b8-4851-a836-853c401899b5" (UID: "180ae149-69b8-4851-a836-853c401899b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:41:48 crc kubenswrapper[5024]: I1007 12:41:48.420452 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/180ae149-69b8-4851-a836-853c401899b5-kube-api-access-kw5ln" (OuterVolumeSpecName: "kube-api-access-kw5ln") pod "180ae149-69b8-4851-a836-853c401899b5" (UID: "180ae149-69b8-4851-a836-853c401899b5"). InnerVolumeSpecName "kube-api-access-kw5ln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:41:48 crc kubenswrapper[5024]: I1007 12:41:48.426472 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/180ae149-69b8-4851-a836-853c401899b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "180ae149-69b8-4851-a836-853c401899b5" (UID: "180ae149-69b8-4851-a836-853c401899b5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:41:48 crc kubenswrapper[5024]: I1007 12:41:48.515035 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw5ln\" (UniqueName: \"kubernetes.io/projected/180ae149-69b8-4851-a836-853c401899b5-kube-api-access-kw5ln\") on node \"crc\" DevicePath \"\"" Oct 07 12:41:48 crc kubenswrapper[5024]: I1007 12:41:48.515409 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/180ae149-69b8-4851-a836-853c401899b5-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 12:41:48 crc kubenswrapper[5024]: I1007 12:41:48.515423 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/180ae149-69b8-4851-a836-853c401899b5-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 12:41:49 crc kubenswrapper[5024]: I1007 12:41:49.145552 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qp7vd" event={"ID":"180ae149-69b8-4851-a836-853c401899b5","Type":"ContainerDied","Data":"146e4d8fdb872a4fc4db00c559f798a5bcfd17d09ae4bfbc11bf992970c1255c"} Oct 07 12:41:49 crc kubenswrapper[5024]: I1007 12:41:49.145618 5024 scope.go:117] "RemoveContainer" containerID="5726d4d5eb43852526edade0ab7afee1925412d55f049c8798caec7af7074410" Oct 07 12:41:49 crc kubenswrapper[5024]: I1007 12:41:49.145655 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qp7vd" Oct 07 12:41:49 crc kubenswrapper[5024]: I1007 12:41:49.169520 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qp7vd"] Oct 07 12:41:49 crc kubenswrapper[5024]: I1007 12:41:49.171799 5024 scope.go:117] "RemoveContainer" containerID="dd396b2d3bdc7c56fb39c9fc4f291afcf544e22b3b0d8670fbc06dbf8c16f059" Oct 07 12:41:49 crc kubenswrapper[5024]: I1007 12:41:49.175252 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qp7vd"] Oct 07 12:41:49 crc kubenswrapper[5024]: I1007 12:41:49.187729 5024 scope.go:117] "RemoveContainer" containerID="ce4251d784ba35f40539e803a59d19bb78bf364f9b2973f71476872c23d68de5" Oct 07 12:41:50 crc kubenswrapper[5024]: I1007 12:41:50.759227 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="180ae149-69b8-4851-a836-853c401899b5" path="/var/lib/kubelet/pods/180ae149-69b8-4851-a836-853c401899b5/volumes" Oct 07 12:42:06 crc kubenswrapper[5024]: I1007 12:42:06.842829 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-664cbcfb76-9l549" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.498159 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-4c7dl"] Oct 07 12:42:07 crc kubenswrapper[5024]: E1007 12:42:07.498488 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="180ae149-69b8-4851-a836-853c401899b5" containerName="extract-utilities" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.498514 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="180ae149-69b8-4851-a836-853c401899b5" containerName="extract-utilities" Oct 07 12:42:07 crc kubenswrapper[5024]: E1007 12:42:07.498524 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="180ae149-69b8-4851-a836-853c401899b5" 
containerName="registry-server" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.498532 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="180ae149-69b8-4851-a836-853c401899b5" containerName="registry-server" Oct 07 12:42:07 crc kubenswrapper[5024]: E1007 12:42:07.498539 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="180ae149-69b8-4851-a836-853c401899b5" containerName="extract-content" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.498547 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="180ae149-69b8-4851-a836-853c401899b5" containerName="extract-content" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.498668 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="180ae149-69b8-4851-a836-853c401899b5" containerName="registry-server" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.499202 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-4c7dl" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.501219 5024 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-h9p9w" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.501268 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-t6frw"] Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.501559 5024 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.517202 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-t6frw" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.520198 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.520270 5024 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.533313 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-4c7dl"] Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.584885 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-rb2nn"] Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.585743 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-rb2nn" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.587489 5024 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.587802 5024 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.587863 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.588070 5024 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-zkc42" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.596912 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d820a8c-2e7c-4433-a11d-8694890a25c3-cert\") pod \"frr-k8s-webhook-server-64bf5d555-4c7dl\" (UID: \"8d820a8c-2e7c-4433-a11d-8694890a25c3\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-4c7dl" Oct 07 
12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.596951 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e1253450-f731-4770-b518-b8a4fa6138c5-reloader\") pod \"frr-k8s-t6frw\" (UID: \"e1253450-f731-4770-b518-b8a4fa6138c5\") " pod="metallb-system/frr-k8s-t6frw" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.596976 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e1253450-f731-4770-b518-b8a4fa6138c5-frr-sockets\") pod \"frr-k8s-t6frw\" (UID: \"e1253450-f731-4770-b518-b8a4fa6138c5\") " pod="metallb-system/frr-k8s-t6frw" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.596999 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-945bj\" (UniqueName: \"kubernetes.io/projected/e1253450-f731-4770-b518-b8a4fa6138c5-kube-api-access-945bj\") pod \"frr-k8s-t6frw\" (UID: \"e1253450-f731-4770-b518-b8a4fa6138c5\") " pod="metallb-system/frr-k8s-t6frw" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.597182 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7snqk\" (UniqueName: \"kubernetes.io/projected/8d820a8c-2e7c-4433-a11d-8694890a25c3-kube-api-access-7snqk\") pod \"frr-k8s-webhook-server-64bf5d555-4c7dl\" (UID: \"8d820a8c-2e7c-4433-a11d-8694890a25c3\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-4c7dl" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.597228 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e1253450-f731-4770-b518-b8a4fa6138c5-frr-conf\") pod \"frr-k8s-t6frw\" (UID: \"e1253450-f731-4770-b518-b8a4fa6138c5\") " pod="metallb-system/frr-k8s-t6frw" Oct 07 12:42:07 crc kubenswrapper[5024]: 
I1007 12:42:07.597339 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e1253450-f731-4770-b518-b8a4fa6138c5-metrics\") pod \"frr-k8s-t6frw\" (UID: \"e1253450-f731-4770-b518-b8a4fa6138c5\") " pod="metallb-system/frr-k8s-t6frw" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.597366 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e1253450-f731-4770-b518-b8a4fa6138c5-metrics-certs\") pod \"frr-k8s-t6frw\" (UID: \"e1253450-f731-4770-b518-b8a4fa6138c5\") " pod="metallb-system/frr-k8s-t6frw" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.597386 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e1253450-f731-4770-b518-b8a4fa6138c5-frr-startup\") pod \"frr-k8s-t6frw\" (UID: \"e1253450-f731-4770-b518-b8a4fa6138c5\") " pod="metallb-system/frr-k8s-t6frw" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.604455 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-68d546b9d8-zmgnv"] Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.605459 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-68d546b9d8-zmgnv" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.607020 5024 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.654691 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-zmgnv"] Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.699196 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e1253450-f731-4770-b518-b8a4fa6138c5-metrics\") pod \"frr-k8s-t6frw\" (UID: \"e1253450-f731-4770-b518-b8a4fa6138c5\") " pod="metallb-system/frr-k8s-t6frw" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.699266 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/85813e66-e855-404d-b428-6329919f1a42-metallb-excludel2\") pod \"speaker-rb2nn\" (UID: \"85813e66-e855-404d-b428-6329919f1a42\") " pod="metallb-system/speaker-rb2nn" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.699303 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e1253450-f731-4770-b518-b8a4fa6138c5-metrics-certs\") pod \"frr-k8s-t6frw\" (UID: \"e1253450-f731-4770-b518-b8a4fa6138c5\") " pod="metallb-system/frr-k8s-t6frw" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.699333 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e1253450-f731-4770-b518-b8a4fa6138c5-frr-startup\") pod \"frr-k8s-t6frw\" (UID: \"e1253450-f731-4770-b518-b8a4fa6138c5\") " pod="metallb-system/frr-k8s-t6frw" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.699382 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/85813e66-e855-404d-b428-6329919f1a42-memberlist\") pod \"speaker-rb2nn\" (UID: \"85813e66-e855-404d-b428-6329919f1a42\") " pod="metallb-system/speaker-rb2nn" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.699406 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d820a8c-2e7c-4433-a11d-8694890a25c3-cert\") pod \"frr-k8s-webhook-server-64bf5d555-4c7dl\" (UID: \"8d820a8c-2e7c-4433-a11d-8694890a25c3\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-4c7dl" Oct 07 12:42:07 crc kubenswrapper[5024]: E1007 12:42:07.699445 5024 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.700372 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e1253450-f731-4770-b518-b8a4fa6138c5-reloader\") pod \"frr-k8s-t6frw\" (UID: \"e1253450-f731-4770-b518-b8a4fa6138c5\") " pod="metallb-system/frr-k8s-t6frw" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.700206 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e1253450-f731-4770-b518-b8a4fa6138c5-frr-startup\") pod \"frr-k8s-t6frw\" (UID: \"e1253450-f731-4770-b518-b8a4fa6138c5\") " pod="metallb-system/frr-k8s-t6frw" Oct 07 12:42:07 crc kubenswrapper[5024]: E1007 12:42:07.700421 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1253450-f731-4770-b518-b8a4fa6138c5-metrics-certs podName:e1253450-f731-4770-b518-b8a4fa6138c5 nodeName:}" failed. No retries permitted until 2025-10-07 12:42:08.200398923 +0000 UTC m=+866.276185811 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e1253450-f731-4770-b518-b8a4fa6138c5-metrics-certs") pod "frr-k8s-t6frw" (UID: "e1253450-f731-4770-b518-b8a4fa6138c5") : secret "frr-k8s-certs-secret" not found Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.700536 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/85813e66-e855-404d-b428-6329919f1a42-metrics-certs\") pod \"speaker-rb2nn\" (UID: \"85813e66-e855-404d-b428-6329919f1a42\") " pod="metallb-system/speaker-rb2nn" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.700606 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e1253450-f731-4770-b518-b8a4fa6138c5-frr-sockets\") pod \"frr-k8s-t6frw\" (UID: \"e1253450-f731-4770-b518-b8a4fa6138c5\") " pod="metallb-system/frr-k8s-t6frw" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.700639 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8tgk\" (UniqueName: \"kubernetes.io/projected/5da747bb-51b6-4f81-bb62-ee6f0a4849f9-kube-api-access-n8tgk\") pod \"controller-68d546b9d8-zmgnv\" (UID: \"5da747bb-51b6-4f81-bb62-ee6f0a4849f9\") " pod="metallb-system/controller-68d546b9d8-zmgnv" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.700688 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-945bj\" (UniqueName: \"kubernetes.io/projected/e1253450-f731-4770-b518-b8a4fa6138c5-kube-api-access-945bj\") pod \"frr-k8s-t6frw\" (UID: \"e1253450-f731-4770-b518-b8a4fa6138c5\") " pod="metallb-system/frr-k8s-t6frw" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.700711 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/5da747bb-51b6-4f81-bb62-ee6f0a4849f9-cert\") pod \"controller-68d546b9d8-zmgnv\" (UID: \"5da747bb-51b6-4f81-bb62-ee6f0a4849f9\") " pod="metallb-system/controller-68d546b9d8-zmgnv" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.700742 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e1253450-f731-4770-b518-b8a4fa6138c5-reloader\") pod \"frr-k8s-t6frw\" (UID: \"e1253450-f731-4770-b518-b8a4fa6138c5\") " pod="metallb-system/frr-k8s-t6frw" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.700778 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7snqk\" (UniqueName: \"kubernetes.io/projected/8d820a8c-2e7c-4433-a11d-8694890a25c3-kube-api-access-7snqk\") pod \"frr-k8s-webhook-server-64bf5d555-4c7dl\" (UID: \"8d820a8c-2e7c-4433-a11d-8694890a25c3\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-4c7dl" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.700803 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e1253450-f731-4770-b518-b8a4fa6138c5-frr-conf\") pod \"frr-k8s-t6frw\" (UID: \"e1253450-f731-4770-b518-b8a4fa6138c5\") " pod="metallb-system/frr-k8s-t6frw" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.700826 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e1253450-f731-4770-b518-b8a4fa6138c5-frr-sockets\") pod \"frr-k8s-t6frw\" (UID: \"e1253450-f731-4770-b518-b8a4fa6138c5\") " pod="metallb-system/frr-k8s-t6frw" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.700868 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5da747bb-51b6-4f81-bb62-ee6f0a4849f9-metrics-certs\") pod \"controller-68d546b9d8-zmgnv\" (UID: 
\"5da747bb-51b6-4f81-bb62-ee6f0a4849f9\") " pod="metallb-system/controller-68d546b9d8-zmgnv" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.700909 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvp9k\" (UniqueName: \"kubernetes.io/projected/85813e66-e855-404d-b428-6329919f1a42-kube-api-access-rvp9k\") pod \"speaker-rb2nn\" (UID: \"85813e66-e855-404d-b428-6329919f1a42\") " pod="metallb-system/speaker-rb2nn" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.701179 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e1253450-f731-4770-b518-b8a4fa6138c5-frr-conf\") pod \"frr-k8s-t6frw\" (UID: \"e1253450-f731-4770-b518-b8a4fa6138c5\") " pod="metallb-system/frr-k8s-t6frw" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.701403 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e1253450-f731-4770-b518-b8a4fa6138c5-metrics\") pod \"frr-k8s-t6frw\" (UID: \"e1253450-f731-4770-b518-b8a4fa6138c5\") " pod="metallb-system/frr-k8s-t6frw" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.711331 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d820a8c-2e7c-4433-a11d-8694890a25c3-cert\") pod \"frr-k8s-webhook-server-64bf5d555-4c7dl\" (UID: \"8d820a8c-2e7c-4433-a11d-8694890a25c3\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-4c7dl" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.721942 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7snqk\" (UniqueName: \"kubernetes.io/projected/8d820a8c-2e7c-4433-a11d-8694890a25c3-kube-api-access-7snqk\") pod \"frr-k8s-webhook-server-64bf5d555-4c7dl\" (UID: \"8d820a8c-2e7c-4433-a11d-8694890a25c3\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-4c7dl" Oct 07 12:42:07 crc 
kubenswrapper[5024]: I1007 12:42:07.723450 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-945bj\" (UniqueName: \"kubernetes.io/projected/e1253450-f731-4770-b518-b8a4fa6138c5-kube-api-access-945bj\") pod \"frr-k8s-t6frw\" (UID: \"e1253450-f731-4770-b518-b8a4fa6138c5\") " pod="metallb-system/frr-k8s-t6frw" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.802462 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/85813e66-e855-404d-b428-6329919f1a42-metrics-certs\") pod \"speaker-rb2nn\" (UID: \"85813e66-e855-404d-b428-6329919f1a42\") " pod="metallb-system/speaker-rb2nn" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.802529 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8tgk\" (UniqueName: \"kubernetes.io/projected/5da747bb-51b6-4f81-bb62-ee6f0a4849f9-kube-api-access-n8tgk\") pod \"controller-68d546b9d8-zmgnv\" (UID: \"5da747bb-51b6-4f81-bb62-ee6f0a4849f9\") " pod="metallb-system/controller-68d546b9d8-zmgnv" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.802577 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5da747bb-51b6-4f81-bb62-ee6f0a4849f9-cert\") pod \"controller-68d546b9d8-zmgnv\" (UID: \"5da747bb-51b6-4f81-bb62-ee6f0a4849f9\") " pod="metallb-system/controller-68d546b9d8-zmgnv" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.802649 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5da747bb-51b6-4f81-bb62-ee6f0a4849f9-metrics-certs\") pod \"controller-68d546b9d8-zmgnv\" (UID: \"5da747bb-51b6-4f81-bb62-ee6f0a4849f9\") " pod="metallb-system/controller-68d546b9d8-zmgnv" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.802680 5024 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-rvp9k\" (UniqueName: \"kubernetes.io/projected/85813e66-e855-404d-b428-6329919f1a42-kube-api-access-rvp9k\") pod \"speaker-rb2nn\" (UID: \"85813e66-e855-404d-b428-6329919f1a42\") " pod="metallb-system/speaker-rb2nn" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.802719 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/85813e66-e855-404d-b428-6329919f1a42-metallb-excludel2\") pod \"speaker-rb2nn\" (UID: \"85813e66-e855-404d-b428-6329919f1a42\") " pod="metallb-system/speaker-rb2nn" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.802783 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/85813e66-e855-404d-b428-6329919f1a42-memberlist\") pod \"speaker-rb2nn\" (UID: \"85813e66-e855-404d-b428-6329919f1a42\") " pod="metallb-system/speaker-rb2nn" Oct 07 12:42:07 crc kubenswrapper[5024]: E1007 12:42:07.802931 5024 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 07 12:42:07 crc kubenswrapper[5024]: E1007 12:42:07.802988 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85813e66-e855-404d-b428-6329919f1a42-memberlist podName:85813e66-e855-404d-b428-6329919f1a42 nodeName:}" failed. No retries permitted until 2025-10-07 12:42:08.30297094 +0000 UTC m=+866.378757788 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/85813e66-e855-404d-b428-6329919f1a42-memberlist") pod "speaker-rb2nn" (UID: "85813e66-e855-404d-b428-6329919f1a42") : secret "metallb-memberlist" not found Oct 07 12:42:07 crc kubenswrapper[5024]: E1007 12:42:07.804111 5024 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Oct 07 12:42:07 crc kubenswrapper[5024]: E1007 12:42:07.804180 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85813e66-e855-404d-b428-6329919f1a42-metrics-certs podName:85813e66-e855-404d-b428-6329919f1a42 nodeName:}" failed. No retries permitted until 2025-10-07 12:42:08.304167275 +0000 UTC m=+866.379954113 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/85813e66-e855-404d-b428-6329919f1a42-metrics-certs") pod "speaker-rb2nn" (UID: "85813e66-e855-404d-b428-6329919f1a42") : secret "speaker-certs-secret" not found Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.806277 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/85813e66-e855-404d-b428-6329919f1a42-metallb-excludel2\") pod \"speaker-rb2nn\" (UID: \"85813e66-e855-404d-b428-6329919f1a42\") " pod="metallb-system/speaker-rb2nn" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.810478 5024 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.810823 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5da747bb-51b6-4f81-bb62-ee6f0a4849f9-metrics-certs\") pod \"controller-68d546b9d8-zmgnv\" (UID: \"5da747bb-51b6-4f81-bb62-ee6f0a4849f9\") " pod="metallb-system/controller-68d546b9d8-zmgnv" Oct 07 12:42:07 crc 
kubenswrapper[5024]: I1007 12:42:07.822243 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5da747bb-51b6-4f81-bb62-ee6f0a4849f9-cert\") pod \"controller-68d546b9d8-zmgnv\" (UID: \"5da747bb-51b6-4f81-bb62-ee6f0a4849f9\") " pod="metallb-system/controller-68d546b9d8-zmgnv" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.827618 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8tgk\" (UniqueName: \"kubernetes.io/projected/5da747bb-51b6-4f81-bb62-ee6f0a4849f9-kube-api-access-n8tgk\") pod \"controller-68d546b9d8-zmgnv\" (UID: \"5da747bb-51b6-4f81-bb62-ee6f0a4849f9\") " pod="metallb-system/controller-68d546b9d8-zmgnv" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.830263 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvp9k\" (UniqueName: \"kubernetes.io/projected/85813e66-e855-404d-b428-6329919f1a42-kube-api-access-rvp9k\") pod \"speaker-rb2nn\" (UID: \"85813e66-e855-404d-b428-6329919f1a42\") " pod="metallb-system/speaker-rb2nn" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.837735 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-4c7dl" Oct 07 12:42:07 crc kubenswrapper[5024]: I1007 12:42:07.918573 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-68d546b9d8-zmgnv" Oct 07 12:42:08 crc kubenswrapper[5024]: I1007 12:42:08.207310 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e1253450-f731-4770-b518-b8a4fa6138c5-metrics-certs\") pod \"frr-k8s-t6frw\" (UID: \"e1253450-f731-4770-b518-b8a4fa6138c5\") " pod="metallb-system/frr-k8s-t6frw" Oct 07 12:42:08 crc kubenswrapper[5024]: I1007 12:42:08.215331 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e1253450-f731-4770-b518-b8a4fa6138c5-metrics-certs\") pod \"frr-k8s-t6frw\" (UID: \"e1253450-f731-4770-b518-b8a4fa6138c5\") " pod="metallb-system/frr-k8s-t6frw" Oct 07 12:42:08 crc kubenswrapper[5024]: I1007 12:42:08.227907 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-4c7dl"] Oct 07 12:42:08 crc kubenswrapper[5024]: W1007 12:42:08.230544 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d820a8c_2e7c_4433_a11d_8694890a25c3.slice/crio-6fab63da7847d16e8c37ab2d1318a6457a511d20d15c59b608bc91c660d0d016 WatchSource:0}: Error finding container 6fab63da7847d16e8c37ab2d1318a6457a511d20d15c59b608bc91c660d0d016: Status 404 returned error can't find the container with id 6fab63da7847d16e8c37ab2d1318a6457a511d20d15c59b608bc91c660d0d016 Oct 07 12:42:08 crc kubenswrapper[5024]: I1007 12:42:08.247626 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-4c7dl" event={"ID":"8d820a8c-2e7c-4433-a11d-8694890a25c3","Type":"ContainerStarted","Data":"6fab63da7847d16e8c37ab2d1318a6457a511d20d15c59b608bc91c660d0d016"} Oct 07 12:42:08 crc kubenswrapper[5024]: I1007 12:42:08.309267 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["metallb-system/controller-68d546b9d8-zmgnv"] Oct 07 12:42:08 crc kubenswrapper[5024]: I1007 12:42:08.310002 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/85813e66-e855-404d-b428-6329919f1a42-memberlist\") pod \"speaker-rb2nn\" (UID: \"85813e66-e855-404d-b428-6329919f1a42\") " pod="metallb-system/speaker-rb2nn" Oct 07 12:42:08 crc kubenswrapper[5024]: I1007 12:42:08.310060 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/85813e66-e855-404d-b428-6329919f1a42-metrics-certs\") pod \"speaker-rb2nn\" (UID: \"85813e66-e855-404d-b428-6329919f1a42\") " pod="metallb-system/speaker-rb2nn" Oct 07 12:42:08 crc kubenswrapper[5024]: E1007 12:42:08.310329 5024 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 07 12:42:08 crc kubenswrapper[5024]: E1007 12:42:08.310388 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85813e66-e855-404d-b428-6329919f1a42-memberlist podName:85813e66-e855-404d-b428-6329919f1a42 nodeName:}" failed. No retries permitted until 2025-10-07 12:42:09.310369154 +0000 UTC m=+867.386156032 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/85813e66-e855-404d-b428-6329919f1a42-memberlist") pod "speaker-rb2nn" (UID: "85813e66-e855-404d-b428-6329919f1a42") : secret "metallb-memberlist" not found Oct 07 12:42:08 crc kubenswrapper[5024]: I1007 12:42:08.313938 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/85813e66-e855-404d-b428-6329919f1a42-metrics-certs\") pod \"speaker-rb2nn\" (UID: \"85813e66-e855-404d-b428-6329919f1a42\") " pod="metallb-system/speaker-rb2nn" Oct 07 12:42:08 crc kubenswrapper[5024]: I1007 12:42:08.449889 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-t6frw" Oct 07 12:42:09 crc kubenswrapper[5024]: I1007 12:42:09.254456 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-zmgnv" event={"ID":"5da747bb-51b6-4f81-bb62-ee6f0a4849f9","Type":"ContainerStarted","Data":"79055f4b797696d4efbf8b6174657d0f6af3aedaff7b44d81a425b5ff3e5c0fa"} Oct 07 12:42:09 crc kubenswrapper[5024]: I1007 12:42:09.254501 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-zmgnv" event={"ID":"5da747bb-51b6-4f81-bb62-ee6f0a4849f9","Type":"ContainerStarted","Data":"167bd7a7079626c929a24c52496fb6791d1d0a86abb71350b9839b7bd892af90"} Oct 07 12:42:09 crc kubenswrapper[5024]: I1007 12:42:09.254511 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-zmgnv" event={"ID":"5da747bb-51b6-4f81-bb62-ee6f0a4849f9","Type":"ContainerStarted","Data":"0bdbd11dbb9f8634b5eb756c6d48da8465101ab5fbc1730fea473b93a843117a"} Oct 07 12:42:09 crc kubenswrapper[5024]: I1007 12:42:09.254594 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-68d546b9d8-zmgnv" Oct 07 12:42:09 crc kubenswrapper[5024]: I1007 12:42:09.255717 5024 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6frw" event={"ID":"e1253450-f731-4770-b518-b8a4fa6138c5","Type":"ContainerStarted","Data":"18d9ace0dc13dbc1c3a03b9f9c7e32071938c9168463dc243c26acd7e26469c7"} Oct 07 12:42:09 crc kubenswrapper[5024]: I1007 12:42:09.270352 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-68d546b9d8-zmgnv" podStartSLOduration=2.270330596 podStartE2EDuration="2.270330596s" podCreationTimestamp="2025-10-07 12:42:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:42:09.269616645 +0000 UTC m=+867.345403503" watchObservedRunningTime="2025-10-07 12:42:09.270330596 +0000 UTC m=+867.346117434" Oct 07 12:42:09 crc kubenswrapper[5024]: I1007 12:42:09.322623 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/85813e66-e855-404d-b428-6329919f1a42-memberlist\") pod \"speaker-rb2nn\" (UID: \"85813e66-e855-404d-b428-6329919f1a42\") " pod="metallb-system/speaker-rb2nn" Oct 07 12:42:09 crc kubenswrapper[5024]: I1007 12:42:09.328047 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/85813e66-e855-404d-b428-6329919f1a42-memberlist\") pod \"speaker-rb2nn\" (UID: \"85813e66-e855-404d-b428-6329919f1a42\") " pod="metallb-system/speaker-rb2nn" Oct 07 12:42:09 crc kubenswrapper[5024]: I1007 12:42:09.399218 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-rb2nn" Oct 07 12:42:09 crc kubenswrapper[5024]: W1007 12:42:09.418484 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85813e66_e855_404d_b428_6329919f1a42.slice/crio-573f59afc031053128eb5b15c80a4fbef74ae298ad5aed3181f5043ac88bf0e8 WatchSource:0}: Error finding container 573f59afc031053128eb5b15c80a4fbef74ae298ad5aed3181f5043ac88bf0e8: Status 404 returned error can't find the container with id 573f59afc031053128eb5b15c80a4fbef74ae298ad5aed3181f5043ac88bf0e8 Oct 07 12:42:10 crc kubenswrapper[5024]: I1007 12:42:10.262858 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rb2nn" event={"ID":"85813e66-e855-404d-b428-6329919f1a42","Type":"ContainerStarted","Data":"0751ba624d5c529c35c987ae80f4438739ef950cab5b73302da9b9c3602b964c"} Oct 07 12:42:10 crc kubenswrapper[5024]: I1007 12:42:10.263171 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rb2nn" event={"ID":"85813e66-e855-404d-b428-6329919f1a42","Type":"ContainerStarted","Data":"d3b9684815751bd7df026c93715758a31e66ba23c68b920e0ed5a8b055b5e708"} Oct 07 12:42:10 crc kubenswrapper[5024]: I1007 12:42:10.263185 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rb2nn" event={"ID":"85813e66-e855-404d-b428-6329919f1a42","Type":"ContainerStarted","Data":"573f59afc031053128eb5b15c80a4fbef74ae298ad5aed3181f5043ac88bf0e8"} Oct 07 12:42:10 crc kubenswrapper[5024]: I1007 12:42:10.263811 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-rb2nn" Oct 07 12:42:10 crc kubenswrapper[5024]: I1007 12:42:10.283603 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-rb2nn" podStartSLOduration=3.283584779 podStartE2EDuration="3.283584779s" podCreationTimestamp="2025-10-07 12:42:07 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:42:10.281313382 +0000 UTC m=+868.357100250" watchObservedRunningTime="2025-10-07 12:42:10.283584779 +0000 UTC m=+868.359371627" Oct 07 12:42:16 crc kubenswrapper[5024]: I1007 12:42:16.308881 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-4c7dl" event={"ID":"8d820a8c-2e7c-4433-a11d-8694890a25c3","Type":"ContainerStarted","Data":"aa4af5fcb97fee2b05a00fa837e07d0a62c9c2f756ffdf63ae8ae81c2aaacaa4"} Oct 07 12:42:16 crc kubenswrapper[5024]: I1007 12:42:16.309311 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-4c7dl" Oct 07 12:42:16 crc kubenswrapper[5024]: I1007 12:42:16.310653 5024 generic.go:334] "Generic (PLEG): container finished" podID="e1253450-f731-4770-b518-b8a4fa6138c5" containerID="4e7c26ed32f3f421b02dce3222bb6c34ecaf9669d93110ba03466ec76ebd10ea" exitCode=0 Oct 07 12:42:16 crc kubenswrapper[5024]: I1007 12:42:16.310691 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6frw" event={"ID":"e1253450-f731-4770-b518-b8a4fa6138c5","Type":"ContainerDied","Data":"4e7c26ed32f3f421b02dce3222bb6c34ecaf9669d93110ba03466ec76ebd10ea"} Oct 07 12:42:16 crc kubenswrapper[5024]: I1007 12:42:16.324610 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-4c7dl" podStartSLOduration=1.560605671 podStartE2EDuration="9.324595882s" podCreationTimestamp="2025-10-07 12:42:07 +0000 UTC" firstStartedPulling="2025-10-07 12:42:08.233347277 +0000 UTC m=+866.309134115" lastFinishedPulling="2025-10-07 12:42:15.997337488 +0000 UTC m=+874.073124326" observedRunningTime="2025-10-07 12:42:16.321632955 +0000 UTC m=+874.397419803" watchObservedRunningTime="2025-10-07 12:42:16.324595882 +0000 UTC m=+874.400382720" Oct 07 12:42:17 
crc kubenswrapper[5024]: I1007 12:42:17.318452 5024 generic.go:334] "Generic (PLEG): container finished" podID="e1253450-f731-4770-b518-b8a4fa6138c5" containerID="9ec9467726cc39ca3c2e491f70b64789826db4e648d8c3d1e21c10be39d8b84b" exitCode=0 Oct 07 12:42:17 crc kubenswrapper[5024]: I1007 12:42:17.318526 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6frw" event={"ID":"e1253450-f731-4770-b518-b8a4fa6138c5","Type":"ContainerDied","Data":"9ec9467726cc39ca3c2e491f70b64789826db4e648d8c3d1e21c10be39d8b84b"} Oct 07 12:42:18 crc kubenswrapper[5024]: I1007 12:42:18.328166 5024 generic.go:334] "Generic (PLEG): container finished" podID="e1253450-f731-4770-b518-b8a4fa6138c5" containerID="647339839e1f38fc2dc5c795362a9a8ae5d26420142d054af2a5747bc64a83fd" exitCode=0 Oct 07 12:42:18 crc kubenswrapper[5024]: I1007 12:42:18.328289 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6frw" event={"ID":"e1253450-f731-4770-b518-b8a4fa6138c5","Type":"ContainerDied","Data":"647339839e1f38fc2dc5c795362a9a8ae5d26420142d054af2a5747bc64a83fd"} Oct 07 12:42:19 crc kubenswrapper[5024]: I1007 12:42:19.341012 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6frw" event={"ID":"e1253450-f731-4770-b518-b8a4fa6138c5","Type":"ContainerStarted","Data":"1ab3cf3a019e8ace99bd15995b87696dfb23645d579b8b83c4e0db4079595837"} Oct 07 12:42:19 crc kubenswrapper[5024]: I1007 12:42:19.341531 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6frw" event={"ID":"e1253450-f731-4770-b518-b8a4fa6138c5","Type":"ContainerStarted","Data":"57d6f867dccbd5168ba506399c8b5ba87d21203e710defd4763960bd36b24c8e"} Oct 07 12:42:19 crc kubenswrapper[5024]: I1007 12:42:19.341553 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6frw" 
event={"ID":"e1253450-f731-4770-b518-b8a4fa6138c5","Type":"ContainerStarted","Data":"1c54c54ca372f188658aa65c759a2374b34ad98c15251abec9afa865ba94b707"} Oct 07 12:42:19 crc kubenswrapper[5024]: I1007 12:42:19.341565 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6frw" event={"ID":"e1253450-f731-4770-b518-b8a4fa6138c5","Type":"ContainerStarted","Data":"369dc049553843a49e0218ada50aa4694b26adab52ce1aee1aa18c1659f82d78"} Oct 07 12:42:19 crc kubenswrapper[5024]: I1007 12:42:19.341578 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6frw" event={"ID":"e1253450-f731-4770-b518-b8a4fa6138c5","Type":"ContainerStarted","Data":"d149861caa58de28bccd0abaacc372844d885083ba87c950c25afaeb142ebdbc"} Oct 07 12:42:19 crc kubenswrapper[5024]: I1007 12:42:19.402657 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-rb2nn" Oct 07 12:42:20 crc kubenswrapper[5024]: I1007 12:42:20.350200 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6frw" event={"ID":"e1253450-f731-4770-b518-b8a4fa6138c5","Type":"ContainerStarted","Data":"a15963c47b5f4a7a84ed09dea4eb05dcc4b24a6dfcef94449155d8c400b3ecb0"} Oct 07 12:42:20 crc kubenswrapper[5024]: I1007 12:42:20.350564 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-t6frw" Oct 07 12:42:20 crc kubenswrapper[5024]: I1007 12:42:20.394200 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-t6frw" podStartSLOduration=5.995714525 podStartE2EDuration="13.394181092s" podCreationTimestamp="2025-10-07 12:42:07 +0000 UTC" firstStartedPulling="2025-10-07 12:42:08.577489425 +0000 UTC m=+866.653276263" lastFinishedPulling="2025-10-07 12:42:15.975955992 +0000 UTC m=+874.051742830" observedRunningTime="2025-10-07 12:42:20.391984428 +0000 UTC m=+878.467771276" watchObservedRunningTime="2025-10-07 12:42:20.394181092 +0000 
UTC m=+878.469967930" Oct 07 12:42:22 crc kubenswrapper[5024]: I1007 12:42:22.144880 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-2zxd5"] Oct 07 12:42:22 crc kubenswrapper[5024]: I1007 12:42:22.146356 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2zxd5" Oct 07 12:42:22 crc kubenswrapper[5024]: I1007 12:42:22.150674 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Oct 07 12:42:22 crc kubenswrapper[5024]: I1007 12:42:22.151066 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Oct 07 12:42:22 crc kubenswrapper[5024]: I1007 12:42:22.151127 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-mgbtf" Oct 07 12:42:22 crc kubenswrapper[5024]: I1007 12:42:22.159813 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2zxd5"] Oct 07 12:42:22 crc kubenswrapper[5024]: I1007 12:42:22.204158 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hsfl\" (UniqueName: \"kubernetes.io/projected/b748d97d-9f9f-46ac-a7c4-5015d44473b2-kube-api-access-4hsfl\") pod \"openstack-operator-index-2zxd5\" (UID: \"b748d97d-9f9f-46ac-a7c4-5015d44473b2\") " pod="openstack-operators/openstack-operator-index-2zxd5" Oct 07 12:42:22 crc kubenswrapper[5024]: I1007 12:42:22.305413 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hsfl\" (UniqueName: \"kubernetes.io/projected/b748d97d-9f9f-46ac-a7c4-5015d44473b2-kube-api-access-4hsfl\") pod \"openstack-operator-index-2zxd5\" (UID: \"b748d97d-9f9f-46ac-a7c4-5015d44473b2\") " pod="openstack-operators/openstack-operator-index-2zxd5" Oct 07 12:42:22 crc 
kubenswrapper[5024]: I1007 12:42:22.324159 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hsfl\" (UniqueName: \"kubernetes.io/projected/b748d97d-9f9f-46ac-a7c4-5015d44473b2-kube-api-access-4hsfl\") pod \"openstack-operator-index-2zxd5\" (UID: \"b748d97d-9f9f-46ac-a7c4-5015d44473b2\") " pod="openstack-operators/openstack-operator-index-2zxd5" Oct 07 12:42:22 crc kubenswrapper[5024]: I1007 12:42:22.464820 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2zxd5" Oct 07 12:42:22 crc kubenswrapper[5024]: I1007 12:42:22.864735 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2zxd5"] Oct 07 12:42:22 crc kubenswrapper[5024]: W1007 12:42:22.868908 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb748d97d_9f9f_46ac_a7c4_5015d44473b2.slice/crio-64991a7cac45097f77d9b54191cf8a88dc324f0382f25f1a4402f3502feb3755 WatchSource:0}: Error finding container 64991a7cac45097f77d9b54191cf8a88dc324f0382f25f1a4402f3502feb3755: Status 404 returned error can't find the container with id 64991a7cac45097f77d9b54191cf8a88dc324f0382f25f1a4402f3502feb3755 Oct 07 12:42:23 crc kubenswrapper[5024]: I1007 12:42:23.368536 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2zxd5" event={"ID":"b748d97d-9f9f-46ac-a7c4-5015d44473b2","Type":"ContainerStarted","Data":"64991a7cac45097f77d9b54191cf8a88dc324f0382f25f1a4402f3502feb3755"} Oct 07 12:42:23 crc kubenswrapper[5024]: I1007 12:42:23.450926 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-t6frw" Oct 07 12:42:23 crc kubenswrapper[5024]: I1007 12:42:23.488886 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-t6frw" Oct 07 12:42:25 crc 
kubenswrapper[5024]: I1007 12:42:25.330954 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-2zxd5"] Oct 07 12:42:25 crc kubenswrapper[5024]: I1007 12:42:25.935888 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-8jtv2"] Oct 07 12:42:25 crc kubenswrapper[5024]: I1007 12:42:25.936821 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8jtv2" Oct 07 12:42:25 crc kubenswrapper[5024]: I1007 12:42:25.944741 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8jtv2"] Oct 07 12:42:26 crc kubenswrapper[5024]: I1007 12:42:26.056887 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qml4\" (UniqueName: \"kubernetes.io/projected/32e9e6cb-6cd0-41d6-8e30-1b91e5020ac3-kube-api-access-9qml4\") pod \"openstack-operator-index-8jtv2\" (UID: \"32e9e6cb-6cd0-41d6-8e30-1b91e5020ac3\") " pod="openstack-operators/openstack-operator-index-8jtv2" Oct 07 12:42:26 crc kubenswrapper[5024]: I1007 12:42:26.158171 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qml4\" (UniqueName: \"kubernetes.io/projected/32e9e6cb-6cd0-41d6-8e30-1b91e5020ac3-kube-api-access-9qml4\") pod \"openstack-operator-index-8jtv2\" (UID: \"32e9e6cb-6cd0-41d6-8e30-1b91e5020ac3\") " pod="openstack-operators/openstack-operator-index-8jtv2" Oct 07 12:42:26 crc kubenswrapper[5024]: I1007 12:42:26.177033 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qml4\" (UniqueName: \"kubernetes.io/projected/32e9e6cb-6cd0-41d6-8e30-1b91e5020ac3-kube-api-access-9qml4\") pod \"openstack-operator-index-8jtv2\" (UID: \"32e9e6cb-6cd0-41d6-8e30-1b91e5020ac3\") " pod="openstack-operators/openstack-operator-index-8jtv2" Oct 07 12:42:26 crc 
kubenswrapper[5024]: I1007 12:42:26.250171 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8jtv2" Oct 07 12:42:26 crc kubenswrapper[5024]: I1007 12:42:26.393786 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2zxd5" event={"ID":"b748d97d-9f9f-46ac-a7c4-5015d44473b2","Type":"ContainerStarted","Data":"0157dbf6863fe0d36e4f466a811bb977cf6a188ba5a2a93bedd8cfd7f42186a5"} Oct 07 12:42:26 crc kubenswrapper[5024]: I1007 12:42:26.393934 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-2zxd5" podUID="b748d97d-9f9f-46ac-a7c4-5015d44473b2" containerName="registry-server" containerID="cri-o://0157dbf6863fe0d36e4f466a811bb977cf6a188ba5a2a93bedd8cfd7f42186a5" gracePeriod=2 Oct 07 12:42:26 crc kubenswrapper[5024]: I1007 12:42:26.411428 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-2zxd5" podStartSLOduration=1.8066770490000001 podStartE2EDuration="4.411410007s" podCreationTimestamp="2025-10-07 12:42:22 +0000 UTC" firstStartedPulling="2025-10-07 12:42:22.871321989 +0000 UTC m=+880.947108827" lastFinishedPulling="2025-10-07 12:42:25.476054957 +0000 UTC m=+883.551841785" observedRunningTime="2025-10-07 12:42:26.408834762 +0000 UTC m=+884.484621600" watchObservedRunningTime="2025-10-07 12:42:26.411410007 +0000 UTC m=+884.487196845" Oct 07 12:42:26 crc kubenswrapper[5024]: I1007 12:42:26.663541 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8jtv2"] Oct 07 12:42:26 crc kubenswrapper[5024]: I1007 12:42:26.745202 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-2zxd5" Oct 07 12:42:26 crc kubenswrapper[5024]: I1007 12:42:26.868692 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hsfl\" (UniqueName: \"kubernetes.io/projected/b748d97d-9f9f-46ac-a7c4-5015d44473b2-kube-api-access-4hsfl\") pod \"b748d97d-9f9f-46ac-a7c4-5015d44473b2\" (UID: \"b748d97d-9f9f-46ac-a7c4-5015d44473b2\") " Oct 07 12:42:26 crc kubenswrapper[5024]: I1007 12:42:26.876328 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b748d97d-9f9f-46ac-a7c4-5015d44473b2-kube-api-access-4hsfl" (OuterVolumeSpecName: "kube-api-access-4hsfl") pod "b748d97d-9f9f-46ac-a7c4-5015d44473b2" (UID: "b748d97d-9f9f-46ac-a7c4-5015d44473b2"). InnerVolumeSpecName "kube-api-access-4hsfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:42:26 crc kubenswrapper[5024]: I1007 12:42:26.970205 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hsfl\" (UniqueName: \"kubernetes.io/projected/b748d97d-9f9f-46ac-a7c4-5015d44473b2-kube-api-access-4hsfl\") on node \"crc\" DevicePath \"\"" Oct 07 12:42:27 crc kubenswrapper[5024]: I1007 12:42:27.403081 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8jtv2" event={"ID":"32e9e6cb-6cd0-41d6-8e30-1b91e5020ac3","Type":"ContainerStarted","Data":"44a69d8958041bffb293fd0400273dc47ecbf44eaf9bb6e753c7cbd82fd1c344"} Oct 07 12:42:27 crc kubenswrapper[5024]: I1007 12:42:27.405182 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8jtv2" event={"ID":"32e9e6cb-6cd0-41d6-8e30-1b91e5020ac3","Type":"ContainerStarted","Data":"1fe5db7ef366c4af2bdaa3c334f3d344c486dd6bc47214cf5f4f9b3a26f64c71"} Oct 07 12:42:27 crc kubenswrapper[5024]: I1007 12:42:27.405700 5024 generic.go:334] "Generic (PLEG): container finished" 
podID="b748d97d-9f9f-46ac-a7c4-5015d44473b2" containerID="0157dbf6863fe0d36e4f466a811bb977cf6a188ba5a2a93bedd8cfd7f42186a5" exitCode=0 Oct 07 12:42:27 crc kubenswrapper[5024]: I1007 12:42:27.405745 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2zxd5" event={"ID":"b748d97d-9f9f-46ac-a7c4-5015d44473b2","Type":"ContainerDied","Data":"0157dbf6863fe0d36e4f466a811bb977cf6a188ba5a2a93bedd8cfd7f42186a5"} Oct 07 12:42:27 crc kubenswrapper[5024]: I1007 12:42:27.405754 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2zxd5" Oct 07 12:42:27 crc kubenswrapper[5024]: I1007 12:42:27.405779 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2zxd5" event={"ID":"b748d97d-9f9f-46ac-a7c4-5015d44473b2","Type":"ContainerDied","Data":"64991a7cac45097f77d9b54191cf8a88dc324f0382f25f1a4402f3502feb3755"} Oct 07 12:42:27 crc kubenswrapper[5024]: I1007 12:42:27.405800 5024 scope.go:117] "RemoveContainer" containerID="0157dbf6863fe0d36e4f466a811bb977cf6a188ba5a2a93bedd8cfd7f42186a5" Oct 07 12:42:27 crc kubenswrapper[5024]: I1007 12:42:27.420964 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-8jtv2" podStartSLOduration=2.371329957 podStartE2EDuration="2.420949322s" podCreationTimestamp="2025-10-07 12:42:25 +0000 UTC" firstStartedPulling="2025-10-07 12:42:26.673765448 +0000 UTC m=+884.749552286" lastFinishedPulling="2025-10-07 12:42:26.723384813 +0000 UTC m=+884.799171651" observedRunningTime="2025-10-07 12:42:27.419661315 +0000 UTC m=+885.495448143" watchObservedRunningTime="2025-10-07 12:42:27.420949322 +0000 UTC m=+885.496736160" Oct 07 12:42:27 crc kubenswrapper[5024]: I1007 12:42:27.429241 5024 scope.go:117] "RemoveContainer" containerID="0157dbf6863fe0d36e4f466a811bb977cf6a188ba5a2a93bedd8cfd7f42186a5" Oct 07 12:42:27 crc 
kubenswrapper[5024]: E1007 12:42:27.429765 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0157dbf6863fe0d36e4f466a811bb977cf6a188ba5a2a93bedd8cfd7f42186a5\": container with ID starting with 0157dbf6863fe0d36e4f466a811bb977cf6a188ba5a2a93bedd8cfd7f42186a5 not found: ID does not exist" containerID="0157dbf6863fe0d36e4f466a811bb977cf6a188ba5a2a93bedd8cfd7f42186a5" Oct 07 12:42:27 crc kubenswrapper[5024]: I1007 12:42:27.429816 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0157dbf6863fe0d36e4f466a811bb977cf6a188ba5a2a93bedd8cfd7f42186a5"} err="failed to get container status \"0157dbf6863fe0d36e4f466a811bb977cf6a188ba5a2a93bedd8cfd7f42186a5\": rpc error: code = NotFound desc = could not find container \"0157dbf6863fe0d36e4f466a811bb977cf6a188ba5a2a93bedd8cfd7f42186a5\": container with ID starting with 0157dbf6863fe0d36e4f466a811bb977cf6a188ba5a2a93bedd8cfd7f42186a5 not found: ID does not exist" Oct 07 12:42:27 crc kubenswrapper[5024]: I1007 12:42:27.443490 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-2zxd5"] Oct 07 12:42:27 crc kubenswrapper[5024]: I1007 12:42:27.447256 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-2zxd5"] Oct 07 12:42:27 crc kubenswrapper[5024]: I1007 12:42:27.843868 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-4c7dl" Oct 07 12:42:27 crc kubenswrapper[5024]: I1007 12:42:27.923353 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-68d546b9d8-zmgnv" Oct 07 12:42:28 crc kubenswrapper[5024]: I1007 12:42:28.453702 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-t6frw" Oct 07 12:42:28 crc kubenswrapper[5024]: I1007 
12:42:28.759231 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b748d97d-9f9f-46ac-a7c4-5015d44473b2" path="/var/lib/kubelet/pods/b748d97d-9f9f-46ac-a7c4-5015d44473b2/volumes" Oct 07 12:42:36 crc kubenswrapper[5024]: I1007 12:42:36.251172 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-8jtv2" Oct 07 12:42:36 crc kubenswrapper[5024]: I1007 12:42:36.251896 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-8jtv2" Oct 07 12:42:36 crc kubenswrapper[5024]: I1007 12:42:36.276390 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-8jtv2" Oct 07 12:42:36 crc kubenswrapper[5024]: I1007 12:42:36.477086 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-8jtv2" Oct 07 12:42:37 crc kubenswrapper[5024]: I1007 12:42:37.972726 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw"] Oct 07 12:42:37 crc kubenswrapper[5024]: E1007 12:42:37.973020 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b748d97d-9f9f-46ac-a7c4-5015d44473b2" containerName="registry-server" Oct 07 12:42:37 crc kubenswrapper[5024]: I1007 12:42:37.973036 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b748d97d-9f9f-46ac-a7c4-5015d44473b2" containerName="registry-server" Oct 07 12:42:37 crc kubenswrapper[5024]: I1007 12:42:37.973290 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="b748d97d-9f9f-46ac-a7c4-5015d44473b2" containerName="registry-server" Oct 07 12:42:37 crc kubenswrapper[5024]: I1007 12:42:37.974313 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw" Oct 07 12:42:37 crc kubenswrapper[5024]: I1007 12:42:37.977095 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-qh6th" Oct 07 12:42:37 crc kubenswrapper[5024]: I1007 12:42:37.985603 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw"] Oct 07 12:42:38 crc kubenswrapper[5024]: I1007 12:42:38.022256 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/21da6c14-ee6e-408d-aecd-ffb8cdeebc99-util\") pod \"c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw\" (UID: \"21da6c14-ee6e-408d-aecd-ffb8cdeebc99\") " pod="openstack-operators/c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw" Oct 07 12:42:38 crc kubenswrapper[5024]: I1007 12:42:38.022317 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sppsd\" (UniqueName: \"kubernetes.io/projected/21da6c14-ee6e-408d-aecd-ffb8cdeebc99-kube-api-access-sppsd\") pod \"c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw\" (UID: \"21da6c14-ee6e-408d-aecd-ffb8cdeebc99\") " pod="openstack-operators/c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw" Oct 07 12:42:38 crc kubenswrapper[5024]: I1007 12:42:38.022377 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/21da6c14-ee6e-408d-aecd-ffb8cdeebc99-bundle\") pod \"c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw\" (UID: \"21da6c14-ee6e-408d-aecd-ffb8cdeebc99\") " pod="openstack-operators/c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw" Oct 07 12:42:38 crc kubenswrapper[5024]: I1007 
12:42:38.124035 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/21da6c14-ee6e-408d-aecd-ffb8cdeebc99-bundle\") pod \"c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw\" (UID: \"21da6c14-ee6e-408d-aecd-ffb8cdeebc99\") " pod="openstack-operators/c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw" Oct 07 12:42:38 crc kubenswrapper[5024]: I1007 12:42:38.124236 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/21da6c14-ee6e-408d-aecd-ffb8cdeebc99-util\") pod \"c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw\" (UID: \"21da6c14-ee6e-408d-aecd-ffb8cdeebc99\") " pod="openstack-operators/c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw" Oct 07 12:42:38 crc kubenswrapper[5024]: I1007 12:42:38.124315 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sppsd\" (UniqueName: \"kubernetes.io/projected/21da6c14-ee6e-408d-aecd-ffb8cdeebc99-kube-api-access-sppsd\") pod \"c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw\" (UID: \"21da6c14-ee6e-408d-aecd-ffb8cdeebc99\") " pod="openstack-operators/c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw" Oct 07 12:42:38 crc kubenswrapper[5024]: I1007 12:42:38.124668 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/21da6c14-ee6e-408d-aecd-ffb8cdeebc99-bundle\") pod \"c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw\" (UID: \"21da6c14-ee6e-408d-aecd-ffb8cdeebc99\") " pod="openstack-operators/c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw" Oct 07 12:42:38 crc kubenswrapper[5024]: I1007 12:42:38.125079 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/21da6c14-ee6e-408d-aecd-ffb8cdeebc99-util\") pod \"c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw\" (UID: \"21da6c14-ee6e-408d-aecd-ffb8cdeebc99\") " pod="openstack-operators/c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw" Oct 07 12:42:38 crc kubenswrapper[5024]: I1007 12:42:38.144686 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sppsd\" (UniqueName: \"kubernetes.io/projected/21da6c14-ee6e-408d-aecd-ffb8cdeebc99-kube-api-access-sppsd\") pod \"c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw\" (UID: \"21da6c14-ee6e-408d-aecd-ffb8cdeebc99\") " pod="openstack-operators/c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw" Oct 07 12:42:38 crc kubenswrapper[5024]: I1007 12:42:38.291464 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw" Oct 07 12:42:38 crc kubenswrapper[5024]: I1007 12:42:38.726100 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw"] Oct 07 12:42:39 crc kubenswrapper[5024]: I1007 12:42:39.486171 5024 generic.go:334] "Generic (PLEG): container finished" podID="21da6c14-ee6e-408d-aecd-ffb8cdeebc99" containerID="c4ba2ca995185f5ddaeb3b77b352c4279ec77711216e37cc22778fda1559ba88" exitCode=0 Oct 07 12:42:39 crc kubenswrapper[5024]: I1007 12:42:39.486241 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw" event={"ID":"21da6c14-ee6e-408d-aecd-ffb8cdeebc99","Type":"ContainerDied","Data":"c4ba2ca995185f5ddaeb3b77b352c4279ec77711216e37cc22778fda1559ba88"} Oct 07 12:42:39 crc kubenswrapper[5024]: I1007 12:42:39.486637 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw" event={"ID":"21da6c14-ee6e-408d-aecd-ffb8cdeebc99","Type":"ContainerStarted","Data":"6207f1ca6e08b427257198a39666742e6a1dba678d9cd578a092ee83f6b82214"} Oct 07 12:42:40 crc kubenswrapper[5024]: I1007 12:42:40.497065 5024 generic.go:334] "Generic (PLEG): container finished" podID="21da6c14-ee6e-408d-aecd-ffb8cdeebc99" containerID="16299a5aa96004e1fe1acc2847ce25555f5212620869762b91a4d9a769a7bbfd" exitCode=0 Oct 07 12:42:40 crc kubenswrapper[5024]: I1007 12:42:40.497174 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw" event={"ID":"21da6c14-ee6e-408d-aecd-ffb8cdeebc99","Type":"ContainerDied","Data":"16299a5aa96004e1fe1acc2847ce25555f5212620869762b91a4d9a769a7bbfd"} Oct 07 12:42:41 crc kubenswrapper[5024]: I1007 12:42:41.508889 5024 generic.go:334] "Generic (PLEG): container finished" podID="21da6c14-ee6e-408d-aecd-ffb8cdeebc99" containerID="5f9c453400c88d6d82901f98ad38184ad1b0b74b5eed7db60868a0f248a6a989" exitCode=0 Oct 07 12:42:41 crc kubenswrapper[5024]: I1007 12:42:41.508942 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw" event={"ID":"21da6c14-ee6e-408d-aecd-ffb8cdeebc99","Type":"ContainerDied","Data":"5f9c453400c88d6d82901f98ad38184ad1b0b74b5eed7db60868a0f248a6a989"} Oct 07 12:42:42 crc kubenswrapper[5024]: I1007 12:42:42.745450 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw" Oct 07 12:42:42 crc kubenswrapper[5024]: I1007 12:42:42.805734 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sppsd\" (UniqueName: \"kubernetes.io/projected/21da6c14-ee6e-408d-aecd-ffb8cdeebc99-kube-api-access-sppsd\") pod \"21da6c14-ee6e-408d-aecd-ffb8cdeebc99\" (UID: \"21da6c14-ee6e-408d-aecd-ffb8cdeebc99\") " Oct 07 12:42:42 crc kubenswrapper[5024]: I1007 12:42:42.805886 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/21da6c14-ee6e-408d-aecd-ffb8cdeebc99-bundle\") pod \"21da6c14-ee6e-408d-aecd-ffb8cdeebc99\" (UID: \"21da6c14-ee6e-408d-aecd-ffb8cdeebc99\") " Oct 07 12:42:42 crc kubenswrapper[5024]: I1007 12:42:42.805963 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/21da6c14-ee6e-408d-aecd-ffb8cdeebc99-util\") pod \"21da6c14-ee6e-408d-aecd-ffb8cdeebc99\" (UID: \"21da6c14-ee6e-408d-aecd-ffb8cdeebc99\") " Oct 07 12:42:42 crc kubenswrapper[5024]: I1007 12:42:42.807352 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21da6c14-ee6e-408d-aecd-ffb8cdeebc99-bundle" (OuterVolumeSpecName: "bundle") pod "21da6c14-ee6e-408d-aecd-ffb8cdeebc99" (UID: "21da6c14-ee6e-408d-aecd-ffb8cdeebc99"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:42:42 crc kubenswrapper[5024]: I1007 12:42:42.817427 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21da6c14-ee6e-408d-aecd-ffb8cdeebc99-kube-api-access-sppsd" (OuterVolumeSpecName: "kube-api-access-sppsd") pod "21da6c14-ee6e-408d-aecd-ffb8cdeebc99" (UID: "21da6c14-ee6e-408d-aecd-ffb8cdeebc99"). InnerVolumeSpecName "kube-api-access-sppsd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:42:42 crc kubenswrapper[5024]: I1007 12:42:42.820436 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21da6c14-ee6e-408d-aecd-ffb8cdeebc99-util" (OuterVolumeSpecName: "util") pod "21da6c14-ee6e-408d-aecd-ffb8cdeebc99" (UID: "21da6c14-ee6e-408d-aecd-ffb8cdeebc99"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:42:42 crc kubenswrapper[5024]: I1007 12:42:42.907738 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sppsd\" (UniqueName: \"kubernetes.io/projected/21da6c14-ee6e-408d-aecd-ffb8cdeebc99-kube-api-access-sppsd\") on node \"crc\" DevicePath \"\"" Oct 07 12:42:42 crc kubenswrapper[5024]: I1007 12:42:42.908187 5024 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/21da6c14-ee6e-408d-aecd-ffb8cdeebc99-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:42:42 crc kubenswrapper[5024]: I1007 12:42:42.908200 5024 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/21da6c14-ee6e-408d-aecd-ffb8cdeebc99-util\") on node \"crc\" DevicePath \"\"" Oct 07 12:42:43 crc kubenswrapper[5024]: I1007 12:42:43.520611 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw" event={"ID":"21da6c14-ee6e-408d-aecd-ffb8cdeebc99","Type":"ContainerDied","Data":"6207f1ca6e08b427257198a39666742e6a1dba678d9cd578a092ee83f6b82214"} Oct 07 12:42:43 crc kubenswrapper[5024]: I1007 12:42:43.520653 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6207f1ca6e08b427257198a39666742e6a1dba678d9cd578a092ee83f6b82214" Oct 07 12:42:43 crc kubenswrapper[5024]: I1007 12:42:43.520731 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw" Oct 07 12:42:50 crc kubenswrapper[5024]: I1007 12:42:50.453818 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-744d6c8d8d-zk24r"] Oct 07 12:42:50 crc kubenswrapper[5024]: E1007 12:42:50.454587 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21da6c14-ee6e-408d-aecd-ffb8cdeebc99" containerName="extract" Oct 07 12:42:50 crc kubenswrapper[5024]: I1007 12:42:50.454599 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="21da6c14-ee6e-408d-aecd-ffb8cdeebc99" containerName="extract" Oct 07 12:42:50 crc kubenswrapper[5024]: E1007 12:42:50.454609 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21da6c14-ee6e-408d-aecd-ffb8cdeebc99" containerName="util" Oct 07 12:42:50 crc kubenswrapper[5024]: I1007 12:42:50.454615 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="21da6c14-ee6e-408d-aecd-ffb8cdeebc99" containerName="util" Oct 07 12:42:50 crc kubenswrapper[5024]: E1007 12:42:50.454625 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21da6c14-ee6e-408d-aecd-ffb8cdeebc99" containerName="pull" Oct 07 12:42:50 crc kubenswrapper[5024]: I1007 12:42:50.454632 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="21da6c14-ee6e-408d-aecd-ffb8cdeebc99" containerName="pull" Oct 07 12:42:50 crc kubenswrapper[5024]: I1007 12:42:50.454749 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="21da6c14-ee6e-408d-aecd-ffb8cdeebc99" containerName="extract" Oct 07 12:42:50 crc kubenswrapper[5024]: I1007 12:42:50.455402 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-744d6c8d8d-zk24r" Oct 07 12:42:50 crc kubenswrapper[5024]: I1007 12:42:50.461102 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-wgs9w" Oct 07 12:42:50 crc kubenswrapper[5024]: I1007 12:42:50.481556 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-744d6c8d8d-zk24r"] Oct 07 12:42:50 crc kubenswrapper[5024]: I1007 12:42:50.514710 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq65f\" (UniqueName: \"kubernetes.io/projected/f42206e4-8007-40cd-9a9d-264867871e2c-kube-api-access-gq65f\") pod \"openstack-operator-controller-operator-744d6c8d8d-zk24r\" (UID: \"f42206e4-8007-40cd-9a9d-264867871e2c\") " pod="openstack-operators/openstack-operator-controller-operator-744d6c8d8d-zk24r" Oct 07 12:42:50 crc kubenswrapper[5024]: I1007 12:42:50.615767 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq65f\" (UniqueName: \"kubernetes.io/projected/f42206e4-8007-40cd-9a9d-264867871e2c-kube-api-access-gq65f\") pod \"openstack-operator-controller-operator-744d6c8d8d-zk24r\" (UID: \"f42206e4-8007-40cd-9a9d-264867871e2c\") " pod="openstack-operators/openstack-operator-controller-operator-744d6c8d8d-zk24r" Oct 07 12:42:50 crc kubenswrapper[5024]: I1007 12:42:50.634235 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq65f\" (UniqueName: \"kubernetes.io/projected/f42206e4-8007-40cd-9a9d-264867871e2c-kube-api-access-gq65f\") pod \"openstack-operator-controller-operator-744d6c8d8d-zk24r\" (UID: \"f42206e4-8007-40cd-9a9d-264867871e2c\") " pod="openstack-operators/openstack-operator-controller-operator-744d6c8d8d-zk24r" Oct 07 12:42:50 crc kubenswrapper[5024]: I1007 12:42:50.771594 5024 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-744d6c8d8d-zk24r" Oct 07 12:42:50 crc kubenswrapper[5024]: I1007 12:42:50.994019 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-744d6c8d8d-zk24r"] Oct 07 12:42:50 crc kubenswrapper[5024]: W1007 12:42:50.997454 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf42206e4_8007_40cd_9a9d_264867871e2c.slice/crio-12109b6453d342d1a5c4d7708f10d900aa0101303ea0a4c2f1f2500897301674 WatchSource:0}: Error finding container 12109b6453d342d1a5c4d7708f10d900aa0101303ea0a4c2f1f2500897301674: Status 404 returned error can't find the container with id 12109b6453d342d1a5c4d7708f10d900aa0101303ea0a4c2f1f2500897301674 Oct 07 12:42:51 crc kubenswrapper[5024]: I1007 12:42:51.569804 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-744d6c8d8d-zk24r" event={"ID":"f42206e4-8007-40cd-9a9d-264867871e2c","Type":"ContainerStarted","Data":"12109b6453d342d1a5c4d7708f10d900aa0101303ea0a4c2f1f2500897301674"} Oct 07 12:42:55 crc kubenswrapper[5024]: I1007 12:42:55.612119 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-744d6c8d8d-zk24r" event={"ID":"f42206e4-8007-40cd-9a9d-264867871e2c","Type":"ContainerStarted","Data":"0ce5b8c635d0e7637ff9285ab992e6cc4474cd68b38ef6e07e55bbd1b418efbc"} Oct 07 12:42:57 crc kubenswrapper[5024]: I1007 12:42:57.626666 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-744d6c8d8d-zk24r" event={"ID":"f42206e4-8007-40cd-9a9d-264867871e2c","Type":"ContainerStarted","Data":"8ec76fb5eff8db07cfa332ecfea0a5c3a56cc7edd76f1946cec90c2dc49aaa51"} Oct 07 12:42:57 crc kubenswrapper[5024]: I1007 
12:42:57.626990 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-744d6c8d8d-zk24r" Oct 07 12:42:57 crc kubenswrapper[5024]: I1007 12:42:57.675676 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-744d6c8d8d-zk24r" podStartSLOduration=1.426393263 podStartE2EDuration="7.67564533s" podCreationTimestamp="2025-10-07 12:42:50 +0000 UTC" firstStartedPulling="2025-10-07 12:42:50.999997633 +0000 UTC m=+909.075784461" lastFinishedPulling="2025-10-07 12:42:57.24924969 +0000 UTC m=+915.325036528" observedRunningTime="2025-10-07 12:42:57.658985521 +0000 UTC m=+915.734772379" watchObservedRunningTime="2025-10-07 12:42:57.67564533 +0000 UTC m=+915.751432188" Oct 07 12:43:00 crc kubenswrapper[5024]: I1007 12:43:00.774886 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-744d6c8d8d-zk24r" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.541112 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-58c4cd55f4-6tzwp"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.544019 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-6tzwp" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.548491 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-h4w86" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.555218 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-646554d9b9-dzx9r"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.556585 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-646554d9b9-dzx9r" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.562028 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-ztmlv" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.576790 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-646554d9b9-dzx9r"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.578438 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcwhl\" (UniqueName: \"kubernetes.io/projected/39116473-582b-4f61-b3c8-44ab955c277b-kube-api-access-jcwhl\") pod \"barbican-operator-controller-manager-58c4cd55f4-6tzwp\" (UID: \"39116473-582b-4f61-b3c8-44ab955c277b\") " pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-6tzwp" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.578530 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbfpv\" (UniqueName: \"kubernetes.io/projected/52744582-1aca-4f75-8dc3-337a19ab3fba-kube-api-access-rbfpv\") pod \"cinder-operator-controller-manager-646554d9b9-dzx9r\" (UID: \"52744582-1aca-4f75-8dc3-337a19ab3fba\") " pod="openstack-operators/cinder-operator-controller-manager-646554d9b9-dzx9r" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.610191 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-58c4cd55f4-6tzwp"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.624582 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-75dfd9b554-8r8zb"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.625744 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-8r8zb" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.632961 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-jqlbn" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.655803 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-75dfd9b554-8r8zb"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.657163 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-54b4974c45-zbddw"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.658405 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-zbddw" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.669745 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-6swgg" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.679921 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbfpv\" (UniqueName: \"kubernetes.io/projected/52744582-1aca-4f75-8dc3-337a19ab3fba-kube-api-access-rbfpv\") pod \"cinder-operator-controller-manager-646554d9b9-dzx9r\" (UID: \"52744582-1aca-4f75-8dc3-337a19ab3fba\") " pod="openstack-operators/cinder-operator-controller-manager-646554d9b9-dzx9r" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.679994 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcwhl\" (UniqueName: \"kubernetes.io/projected/39116473-582b-4f61-b3c8-44ab955c277b-kube-api-access-jcwhl\") pod \"barbican-operator-controller-manager-58c4cd55f4-6tzwp\" (UID: \"39116473-582b-4f61-b3c8-44ab955c277b\") " 
pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-6tzwp" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.680050 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgg26\" (UniqueName: \"kubernetes.io/projected/75de1768-d533-460a-8397-012ef25ade39-kube-api-access-dgg26\") pod \"designate-operator-controller-manager-75dfd9b554-8r8zb\" (UID: \"75de1768-d533-460a-8397-012ef25ade39\") " pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-8r8zb" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.680090 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hsjc\" (UniqueName: \"kubernetes.io/projected/a1d2d630-766f-4486-b909-6f622bdc9748-kube-api-access-4hsjc\") pod \"heat-operator-controller-manager-54b4974c45-zbddw\" (UID: \"a1d2d630-766f-4486-b909-6f622bdc9748\") " pod="openstack-operators/heat-operator-controller-manager-54b4974c45-zbddw" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.687585 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5dc44df7d5-dszfh"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.688721 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-dszfh" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.694672 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-2vb74" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.694871 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-54b4974c45-zbddw"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.701073 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-76d5b87f47-qnjtl"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.702427 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-qnjtl" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.706506 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-89v4k" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.716554 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcwhl\" (UniqueName: \"kubernetes.io/projected/39116473-582b-4f61-b3c8-44ab955c277b-kube-api-access-jcwhl\") pod \"barbican-operator-controller-manager-58c4cd55f4-6tzwp\" (UID: \"39116473-582b-4f61-b3c8-44ab955c277b\") " pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-6tzwp" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.730206 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5dc44df7d5-dszfh"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.732503 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbfpv\" (UniqueName: 
\"kubernetes.io/projected/52744582-1aca-4f75-8dc3-337a19ab3fba-kube-api-access-rbfpv\") pod \"cinder-operator-controller-manager-646554d9b9-dzx9r\" (UID: \"52744582-1aca-4f75-8dc3-337a19ab3fba\") " pod="openstack-operators/cinder-operator-controller-manager-646554d9b9-dzx9r" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.734971 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-658588b8c9-7fsjz"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.751547 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-76d5b87f47-qnjtl"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.761105 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-7fsjz" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.761733 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-658588b8c9-7fsjz"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.765266 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.765511 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-4b4jj" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.781781 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgg26\" (UniqueName: \"kubernetes.io/projected/75de1768-d533-460a-8397-012ef25ade39-kube-api-access-dgg26\") pod \"designate-operator-controller-manager-75dfd9b554-8r8zb\" (UID: \"75de1768-d533-460a-8397-012ef25ade39\") " pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-8r8zb" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.781835 
5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hsjc\" (UniqueName: \"kubernetes.io/projected/a1d2d630-766f-4486-b909-6f622bdc9748-kube-api-access-4hsjc\") pod \"heat-operator-controller-manager-54b4974c45-zbddw\" (UID: \"a1d2d630-766f-4486-b909-6f622bdc9748\") " pod="openstack-operators/heat-operator-controller-manager-54b4974c45-zbddw" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.781885 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9zg9\" (UniqueName: \"kubernetes.io/projected/0a3c98f6-0e02-4493-b3d6-f030d73ca3ac-kube-api-access-g9zg9\") pod \"infra-operator-controller-manager-658588b8c9-7fsjz\" (UID: \"0a3c98f6-0e02-4493-b3d6-f030d73ca3ac\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-7fsjz" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.781906 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r2lr\" (UniqueName: \"kubernetes.io/projected/7df3da1a-3dc0-400e-a3a6-4878652ecfdc-kube-api-access-9r2lr\") pod \"glance-operator-controller-manager-5dc44df7d5-dszfh\" (UID: \"7df3da1a-3dc0-400e-a3a6-4878652ecfdc\") " pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-dszfh" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.781952 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a3c98f6-0e02-4493-b3d6-f030d73ca3ac-cert\") pod \"infra-operator-controller-manager-658588b8c9-7fsjz\" (UID: \"0a3c98f6-0e02-4493-b3d6-f030d73ca3ac\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-7fsjz" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.781971 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ghg6\" (UniqueName: 
\"kubernetes.io/projected/f2285083-77e3-448b-b4f0-27adfb683e17-kube-api-access-6ghg6\") pod \"horizon-operator-controller-manager-76d5b87f47-qnjtl\" (UID: \"f2285083-77e3-448b-b4f0-27adfb683e17\") " pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-qnjtl" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.792015 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-649675d675-xscld"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.792962 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-649675d675-xscld" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.795310 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-pkf8p" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.799721 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgg26\" (UniqueName: \"kubernetes.io/projected/75de1768-d533-460a-8397-012ef25ade39-kube-api-access-dgg26\") pod \"designate-operator-controller-manager-75dfd9b554-8r8zb\" (UID: \"75de1768-d533-460a-8397-012ef25ade39\") " pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-8r8zb" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.800883 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-rwftw"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.808739 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hsjc\" (UniqueName: \"kubernetes.io/projected/a1d2d630-766f-4486-b909-6f622bdc9748-kube-api-access-4hsjc\") pod \"heat-operator-controller-manager-54b4974c45-zbddw\" (UID: \"a1d2d630-766f-4486-b909-6f622bdc9748\") " pod="openstack-operators/heat-operator-controller-manager-54b4974c45-zbddw" Oct 
07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.812088 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-65d89cfd9f-l6h5v"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.813046 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-l6h5v" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.813604 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-rwftw" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.818105 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-st6js" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.824336 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-ql8nz" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.831865 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-649675d675-xscld"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.846538 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-65d89cfd9f-l6h5v"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.854563 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-qc4t4"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.855719 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-qc4t4" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.860287 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-zsx7s" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.862513 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-6tzwp" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.872589 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-rwftw"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.875487 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-646554d9b9-dzx9r" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.879252 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-qc4t4"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.883875 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2khj\" (UniqueName: \"kubernetes.io/projected/4a2174c3-a953-4535-9b70-5414c07633c0-kube-api-access-z2khj\") pod \"mariadb-operator-controller-manager-6cd6d7bdf5-qc4t4\" (UID: \"4a2174c3-a953-4535-9b70-5414c07633c0\") " pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-qc4t4" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.883955 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wllp7\" (UniqueName: \"kubernetes.io/projected/55b45b25-e171-4e43-8da0-b18c06e7515b-kube-api-access-wllp7\") pod \"manila-operator-controller-manager-65d89cfd9f-l6h5v\" (UID: 
\"55b45b25-e171-4e43-8da0-b18c06e7515b\") " pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-l6h5v" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.883978 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk7sm\" (UniqueName: \"kubernetes.io/projected/8c026852-45e6-4a05-bd27-3af46438df69-kube-api-access-sk7sm\") pod \"ironic-operator-controller-manager-649675d675-xscld\" (UID: \"8c026852-45e6-4a05-bd27-3af46438df69\") " pod="openstack-operators/ironic-operator-controller-manager-649675d675-xscld" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.884036 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9zg9\" (UniqueName: \"kubernetes.io/projected/0a3c98f6-0e02-4493-b3d6-f030d73ca3ac-kube-api-access-g9zg9\") pod \"infra-operator-controller-manager-658588b8c9-7fsjz\" (UID: \"0a3c98f6-0e02-4493-b3d6-f030d73ca3ac\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-7fsjz" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.884059 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r2lr\" (UniqueName: \"kubernetes.io/projected/7df3da1a-3dc0-400e-a3a6-4878652ecfdc-kube-api-access-9r2lr\") pod \"glance-operator-controller-manager-5dc44df7d5-dszfh\" (UID: \"7df3da1a-3dc0-400e-a3a6-4878652ecfdc\") " pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-dszfh" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.884096 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a3c98f6-0e02-4493-b3d6-f030d73ca3ac-cert\") pod \"infra-operator-controller-manager-658588b8c9-7fsjz\" (UID: \"0a3c98f6-0e02-4493-b3d6-f030d73ca3ac\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-7fsjz" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.884111 
5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ghg6\" (UniqueName: \"kubernetes.io/projected/f2285083-77e3-448b-b4f0-27adfb683e17-kube-api-access-6ghg6\") pod \"horizon-operator-controller-manager-76d5b87f47-qnjtl\" (UID: \"f2285083-77e3-448b-b4f0-27adfb683e17\") " pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-qnjtl" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.884172 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9vhd\" (UniqueName: \"kubernetes.io/projected/be793dd5-2676-4289-961a-9e6c0731b13a-kube-api-access-h9vhd\") pod \"keystone-operator-controller-manager-7b5ccf6d9c-rwftw\" (UID: \"be793dd5-2676-4289-961a-9e6c0731b13a\") " pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-rwftw" Oct 07 12:43:17 crc kubenswrapper[5024]: E1007 12:43:17.885394 5024 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Oct 07 12:43:17 crc kubenswrapper[5024]: E1007 12:43:17.885451 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a3c98f6-0e02-4493-b3d6-f030d73ca3ac-cert podName:0a3c98f6-0e02-4493-b3d6-f030d73ca3ac nodeName:}" failed. No retries permitted until 2025-10-07 12:43:18.385420753 +0000 UTC m=+936.461207591 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0a3c98f6-0e02-4493-b3d6-f030d73ca3ac-cert") pod "infra-operator-controller-manager-658588b8c9-7fsjz" (UID: "0a3c98f6-0e02-4493-b3d6-f030d73ca3ac") : secret "infra-operator-webhook-server-cert" not found Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.890182 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-8d984cc4d-gshfh"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.891199 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-gshfh" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.893760 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-fkjdb" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.910336 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-8d984cc4d-gshfh"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.928224 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7c7fc454ff-rnxmp"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.929866 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r2lr\" (UniqueName: \"kubernetes.io/projected/7df3da1a-3dc0-400e-a3a6-4878652ecfdc-kube-api-access-9r2lr\") pod \"glance-operator-controller-manager-5dc44df7d5-dszfh\" (UID: \"7df3da1a-3dc0-400e-a3a6-4878652ecfdc\") " pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-dszfh" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.931162 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9zg9\" (UniqueName: 
\"kubernetes.io/projected/0a3c98f6-0e02-4493-b3d6-f030d73ca3ac-kube-api-access-g9zg9\") pod \"infra-operator-controller-manager-658588b8c9-7fsjz\" (UID: \"0a3c98f6-0e02-4493-b3d6-f030d73ca3ac\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-7fsjz" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.934488 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-rnxmp" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.938337 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ghg6\" (UniqueName: \"kubernetes.io/projected/f2285083-77e3-448b-b4f0-27adfb683e17-kube-api-access-6ghg6\") pod \"horizon-operator-controller-manager-76d5b87f47-qnjtl\" (UID: \"f2285083-77e3-448b-b4f0-27adfb683e17\") " pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-qnjtl" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.940439 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7c7fc454ff-rnxmp"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.949651 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7468f855d8-lwz42"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.950156 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-c9q4k" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.950620 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-8r8zb" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.950745 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-lwz42" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.953509 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-b4s6m" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.960208 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7468f855d8-lwz42"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.972338 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.975198 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.978635 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.978897 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-k98nw" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.987540 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wllp7\" (UniqueName: \"kubernetes.io/projected/55b45b25-e171-4e43-8da0-b18c06e7515b-kube-api-access-wllp7\") pod \"manila-operator-controller-manager-65d89cfd9f-l6h5v\" (UID: \"55b45b25-e171-4e43-8da0-b18c06e7515b\") " pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-l6h5v" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.987590 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk7sm\" 
(UniqueName: \"kubernetes.io/projected/8c026852-45e6-4a05-bd27-3af46438df69-kube-api-access-sk7sm\") pod \"ironic-operator-controller-manager-649675d675-xscld\" (UID: \"8c026852-45e6-4a05-bd27-3af46438df69\") " pod="openstack-operators/ironic-operator-controller-manager-649675d675-xscld" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.987667 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9vhd\" (UniqueName: \"kubernetes.io/projected/be793dd5-2676-4289-961a-9e6c0731b13a-kube-api-access-h9vhd\") pod \"keystone-operator-controller-manager-7b5ccf6d9c-rwftw\" (UID: \"be793dd5-2676-4289-961a-9e6c0731b13a\") " pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-rwftw" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.987702 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2khj\" (UniqueName: \"kubernetes.io/projected/4a2174c3-a953-4535-9b70-5414c07633c0-kube-api-access-z2khj\") pod \"mariadb-operator-controller-manager-6cd6d7bdf5-qc4t4\" (UID: \"4a2174c3-a953-4535-9b70-5414c07633c0\") " pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-qc4t4" Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.989845 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb"] Oct 07 12:43:17 crc kubenswrapper[5024]: I1007 12:43:17.998100 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-dgn5z"] Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.009538 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-zbddw" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.023505 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk7sm\" (UniqueName: \"kubernetes.io/projected/8c026852-45e6-4a05-bd27-3af46438df69-kube-api-access-sk7sm\") pod \"ironic-operator-controller-manager-649675d675-xscld\" (UID: \"8c026852-45e6-4a05-bd27-3af46438df69\") " pod="openstack-operators/ironic-operator-controller-manager-649675d675-xscld" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.024079 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9vhd\" (UniqueName: \"kubernetes.io/projected/be793dd5-2676-4289-961a-9e6c0731b13a-kube-api-access-h9vhd\") pod \"keystone-operator-controller-manager-7b5ccf6d9c-rwftw\" (UID: \"be793dd5-2676-4289-961a-9e6c0731b13a\") " pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-rwftw" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.049588 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wllp7\" (UniqueName: \"kubernetes.io/projected/55b45b25-e171-4e43-8da0-b18c06e7515b-kube-api-access-wllp7\") pod \"manila-operator-controller-manager-65d89cfd9f-l6h5v\" (UID: \"55b45b25-e171-4e43-8da0-b18c06e7515b\") " pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-l6h5v" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.049823 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2khj\" (UniqueName: \"kubernetes.io/projected/4a2174c3-a953-4535-9b70-5414c07633c0-kube-api-access-z2khj\") pod \"mariadb-operator-controller-manager-6cd6d7bdf5-qc4t4\" (UID: \"4a2174c3-a953-4535-9b70-5414c07633c0\") " pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-qc4t4" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.050414 5024 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-dgn5z" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.054094 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-244ft" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.084468 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-dszfh" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.085606 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-54689d9f88-26sbg"] Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.087336 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-26sbg" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.090795 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-vt4sd" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.091975 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-qnjtl" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.092759 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctp4p\" (UniqueName: \"kubernetes.io/projected/c310d938-f1f0-4f85-90c3-f0625fc41848-kube-api-access-ctp4p\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb\" (UID: \"c310d938-f1f0-4f85-90c3-f0625fc41848\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.092798 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnwsn\" (UniqueName: \"kubernetes.io/projected/3fa4af87-32c4-423c-98c1-9cb8b7db5da2-kube-api-access-lnwsn\") pod \"ovn-operator-controller-manager-6d8b6f9b9-dgn5z\" (UID: \"3fa4af87-32c4-423c-98c1-9cb8b7db5da2\") " pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-dgn5z" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.092819 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c310d938-f1f0-4f85-90c3-f0625fc41848-cert\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb\" (UID: \"c310d938-f1f0-4f85-90c3-f0625fc41848\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.092875 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsmvx\" (UniqueName: \"kubernetes.io/projected/62fe077d-cf16-4c42-95c3-39435d2c9042-kube-api-access-jsmvx\") pod \"octavia-operator-controller-manager-7468f855d8-lwz42\" (UID: \"62fe077d-cf16-4c42-95c3-39435d2c9042\") " 
pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-lwz42" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.092891 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mml7w\" (UniqueName: \"kubernetes.io/projected/b1d7d818-5c22-4f23-9d2e-0459f36de335-kube-api-access-mml7w\") pod \"placement-operator-controller-manager-54689d9f88-26sbg\" (UID: \"b1d7d818-5c22-4f23-9d2e-0459f36de335\") " pod="openstack-operators/placement-operator-controller-manager-54689d9f88-26sbg" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.092919 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6rh2\" (UniqueName: \"kubernetes.io/projected/f15a5a3b-18f5-4fcf-8ecb-b9e31b144f6a-kube-api-access-n6rh2\") pod \"nova-operator-controller-manager-7c7fc454ff-rnxmp\" (UID: \"f15a5a3b-18f5-4fcf-8ecb-b9e31b144f6a\") " pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-rnxmp" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.092935 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq84p\" (UniqueName: \"kubernetes.io/projected/eeac9611-70f4-4fc6-a161-01420d358164-kube-api-access-mq84p\") pod \"neutron-operator-controller-manager-8d984cc4d-gshfh\" (UID: \"eeac9611-70f4-4fc6-a161-01420d358164\") " pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-gshfh" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.098876 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6859f9b676-mgxlm"] Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.113359 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-mgxlm" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.123209 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-dgn5z"] Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.127178 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-k7h57" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.131833 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6859f9b676-mgxlm"] Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.132466 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-54689d9f88-26sbg"] Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.153043 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-rcqrk"] Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.154480 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-649675d675-xscld" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.156605 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-rcqrk" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.158402 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-rcqrk"] Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.167624 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-lr9zh" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.170953 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-l6h5v" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.191087 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd5cb47d7-tb2fm"] Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.192855 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd5cb47d7-tb2fm"] Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.192960 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-tb2fm" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.194692 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-p42bt" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.194852 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssvxp\" (UniqueName: \"kubernetes.io/projected/d39b8507-a457-4bdb-95ce-e20abf48c406-kube-api-access-ssvxp\") pod \"telemetry-operator-controller-manager-5d4d74dd89-rcqrk\" (UID: \"d39b8507-a457-4bdb-95ce-e20abf48c406\") " pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-rcqrk" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.194927 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsmvx\" (UniqueName: \"kubernetes.io/projected/62fe077d-cf16-4c42-95c3-39435d2c9042-kube-api-access-jsmvx\") pod \"octavia-operator-controller-manager-7468f855d8-lwz42\" (UID: \"62fe077d-cf16-4c42-95c3-39435d2c9042\") " pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-lwz42" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.194954 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mml7w\" (UniqueName: \"kubernetes.io/projected/b1d7d818-5c22-4f23-9d2e-0459f36de335-kube-api-access-mml7w\") pod \"placement-operator-controller-manager-54689d9f88-26sbg\" (UID: \"b1d7d818-5c22-4f23-9d2e-0459f36de335\") " pod="openstack-operators/placement-operator-controller-manager-54689d9f88-26sbg" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.194992 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6rh2\" (UniqueName: \"kubernetes.io/projected/f15a5a3b-18f5-4fcf-8ecb-b9e31b144f6a-kube-api-access-n6rh2\") pod 
\"nova-operator-controller-manager-7c7fc454ff-rnxmp\" (UID: \"f15a5a3b-18f5-4fcf-8ecb-b9e31b144f6a\") " pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-rnxmp" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.195018 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qk6z\" (UniqueName: \"kubernetes.io/projected/eef43156-170e-4dd4-abf1-77fa4763c4b8-kube-api-access-6qk6z\") pod \"swift-operator-controller-manager-6859f9b676-mgxlm\" (UID: \"eef43156-170e-4dd4-abf1-77fa4763c4b8\") " pod="openstack-operators/swift-operator-controller-manager-6859f9b676-mgxlm" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.195061 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq84p\" (UniqueName: \"kubernetes.io/projected/eeac9611-70f4-4fc6-a161-01420d358164-kube-api-access-mq84p\") pod \"neutron-operator-controller-manager-8d984cc4d-gshfh\" (UID: \"eeac9611-70f4-4fc6-a161-01420d358164\") " pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-gshfh" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.195113 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctp4p\" (UniqueName: \"kubernetes.io/projected/c310d938-f1f0-4f85-90c3-f0625fc41848-kube-api-access-ctp4p\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb\" (UID: \"c310d938-f1f0-4f85-90c3-f0625fc41848\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.195162 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp7l9\" (UniqueName: \"kubernetes.io/projected/29940c9b-e33d-432a-86e1-e552ce1cefdd-kube-api-access-zp7l9\") pod \"test-operator-controller-manager-5cd5cb47d7-tb2fm\" (UID: \"29940c9b-e33d-432a-86e1-e552ce1cefdd\") " 
pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-tb2fm" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.195188 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnwsn\" (UniqueName: \"kubernetes.io/projected/3fa4af87-32c4-423c-98c1-9cb8b7db5da2-kube-api-access-lnwsn\") pod \"ovn-operator-controller-manager-6d8b6f9b9-dgn5z\" (UID: \"3fa4af87-32c4-423c-98c1-9cb8b7db5da2\") " pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-dgn5z" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.195212 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c310d938-f1f0-4f85-90c3-f0625fc41848-cert\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb\" (UID: \"c310d938-f1f0-4f85-90c3-f0625fc41848\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb" Oct 07 12:43:18 crc kubenswrapper[5024]: E1007 12:43:18.195333 5024 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 07 12:43:18 crc kubenswrapper[5024]: E1007 12:43:18.195394 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c310d938-f1f0-4f85-90c3-f0625fc41848-cert podName:c310d938-f1f0-4f85-90c3-f0625fc41848 nodeName:}" failed. No retries permitted until 2025-10-07 12:43:18.695371808 +0000 UTC m=+936.771158646 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c310d938-f1f0-4f85-90c3-f0625fc41848-cert") pod "openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb" (UID: "c310d938-f1f0-4f85-90c3-f0625fc41848") : secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.216260 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-qc4t4" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.227924 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctp4p\" (UniqueName: \"kubernetes.io/projected/c310d938-f1f0-4f85-90c3-f0625fc41848-kube-api-access-ctp4p\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb\" (UID: \"c310d938-f1f0-4f85-90c3-f0625fc41848\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.228401 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6rh2\" (UniqueName: \"kubernetes.io/projected/f15a5a3b-18f5-4fcf-8ecb-b9e31b144f6a-kube-api-access-n6rh2\") pod \"nova-operator-controller-manager-7c7fc454ff-rnxmp\" (UID: \"f15a5a3b-18f5-4fcf-8ecb-b9e31b144f6a\") " pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-rnxmp" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.230224 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsmvx\" (UniqueName: \"kubernetes.io/projected/62fe077d-cf16-4c42-95c3-39435d2c9042-kube-api-access-jsmvx\") pod \"octavia-operator-controller-manager-7468f855d8-lwz42\" (UID: \"62fe077d-cf16-4c42-95c3-39435d2c9042\") " pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-lwz42" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.231820 5024 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-rwftw" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.243542 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mml7w\" (UniqueName: \"kubernetes.io/projected/b1d7d818-5c22-4f23-9d2e-0459f36de335-kube-api-access-mml7w\") pod \"placement-operator-controller-manager-54689d9f88-26sbg\" (UID: \"b1d7d818-5c22-4f23-9d2e-0459f36de335\") " pod="openstack-operators/placement-operator-controller-manager-54689d9f88-26sbg" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.244341 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq84p\" (UniqueName: \"kubernetes.io/projected/eeac9611-70f4-4fc6-a161-01420d358164-kube-api-access-mq84p\") pod \"neutron-operator-controller-manager-8d984cc4d-gshfh\" (UID: \"eeac9611-70f4-4fc6-a161-01420d358164\") " pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-gshfh" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.255702 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnwsn\" (UniqueName: \"kubernetes.io/projected/3fa4af87-32c4-423c-98c1-9cb8b7db5da2-kube-api-access-lnwsn\") pod \"ovn-operator-controller-manager-6d8b6f9b9-dgn5z\" (UID: \"3fa4af87-32c4-423c-98c1-9cb8b7db5da2\") " pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-dgn5z" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.261941 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6cbc6dd547-rbwnk"] Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.263353 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-rbwnk" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.266586 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-gshfh" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.267283 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-tkmw7" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.284241 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-rnxmp" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.285277 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6cbc6dd547-rbwnk"] Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.299345 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpz79\" (UniqueName: \"kubernetes.io/projected/5d4c59de-cd84-49b2-b320-3217d5cc31f3-kube-api-access-bpz79\") pod \"watcher-operator-controller-manager-6cbc6dd547-rbwnk\" (UID: \"5d4c59de-cd84-49b2-b320-3217d5cc31f3\") " pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-rbwnk" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.307470 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssvxp\" (UniqueName: \"kubernetes.io/projected/d39b8507-a457-4bdb-95ce-e20abf48c406-kube-api-access-ssvxp\") pod \"telemetry-operator-controller-manager-5d4d74dd89-rcqrk\" (UID: \"d39b8507-a457-4bdb-95ce-e20abf48c406\") " pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-rcqrk" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.307700 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6qk6z\" (UniqueName: \"kubernetes.io/projected/eef43156-170e-4dd4-abf1-77fa4763c4b8-kube-api-access-6qk6z\") pod \"swift-operator-controller-manager-6859f9b676-mgxlm\" (UID: \"eef43156-170e-4dd4-abf1-77fa4763c4b8\") " pod="openstack-operators/swift-operator-controller-manager-6859f9b676-mgxlm" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.307876 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp7l9\" (UniqueName: \"kubernetes.io/projected/29940c9b-e33d-432a-86e1-e552ce1cefdd-kube-api-access-zp7l9\") pod \"test-operator-controller-manager-5cd5cb47d7-tb2fm\" (UID: \"29940c9b-e33d-432a-86e1-e552ce1cefdd\") " pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-tb2fm" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.316993 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-lwz42" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.319833 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-68c8546c8b-bc9s5"] Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.322558 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-68c8546c8b-bc9s5" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.329358 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.330319 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-tl6g2" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.333086 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssvxp\" (UniqueName: \"kubernetes.io/projected/d39b8507-a457-4bdb-95ce-e20abf48c406-kube-api-access-ssvxp\") pod \"telemetry-operator-controller-manager-5d4d74dd89-rcqrk\" (UID: \"d39b8507-a457-4bdb-95ce-e20abf48c406\") " pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-rcqrk" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.333496 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-68c8546c8b-bc9s5"] Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.345767 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp7l9\" (UniqueName: \"kubernetes.io/projected/29940c9b-e33d-432a-86e1-e552ce1cefdd-kube-api-access-zp7l9\") pod \"test-operator-controller-manager-5cd5cb47d7-tb2fm\" (UID: \"29940c9b-e33d-432a-86e1-e552ce1cefdd\") " pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-tb2fm" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.346501 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qk6z\" (UniqueName: \"kubernetes.io/projected/eef43156-170e-4dd4-abf1-77fa4763c4b8-kube-api-access-6qk6z\") pod \"swift-operator-controller-manager-6859f9b676-mgxlm\" (UID: \"eef43156-170e-4dd4-abf1-77fa4763c4b8\") " 
pod="openstack-operators/swift-operator-controller-manager-6859f9b676-mgxlm" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.354648 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-sfklm"] Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.355547 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-sfklm" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.358104 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-4m7xf" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.370440 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-sfklm"] Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.386454 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-26sbg" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.394726 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-mgxlm" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.403409 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-rcqrk" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.418283 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpz79\" (UniqueName: \"kubernetes.io/projected/5d4c59de-cd84-49b2-b320-3217d5cc31f3-kube-api-access-bpz79\") pod \"watcher-operator-controller-manager-6cbc6dd547-rbwnk\" (UID: \"5d4c59de-cd84-49b2-b320-3217d5cc31f3\") " pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-rbwnk" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.418346 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58wsg\" (UniqueName: \"kubernetes.io/projected/01b8fb04-a40d-4e7b-be35-25f4450ec199-kube-api-access-58wsg\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-sfklm\" (UID: \"01b8fb04-a40d-4e7b-be35-25f4450ec199\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-sfklm" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.418418 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4d780f6f-e916-4f61-922f-bbaeceb4db7c-cert\") pod \"openstack-operator-controller-manager-68c8546c8b-bc9s5\" (UID: \"4d780f6f-e916-4f61-922f-bbaeceb4db7c\") " pod="openstack-operators/openstack-operator-controller-manager-68c8546c8b-bc9s5" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.418457 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a3c98f6-0e02-4493-b3d6-f030d73ca3ac-cert\") pod \"infra-operator-controller-manager-658588b8c9-7fsjz\" (UID: \"0a3c98f6-0e02-4493-b3d6-f030d73ca3ac\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-7fsjz" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.418501 5024 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2vrf\" (UniqueName: \"kubernetes.io/projected/4d780f6f-e916-4f61-922f-bbaeceb4db7c-kube-api-access-l2vrf\") pod \"openstack-operator-controller-manager-68c8546c8b-bc9s5\" (UID: \"4d780f6f-e916-4f61-922f-bbaeceb4db7c\") " pod="openstack-operators/openstack-operator-controller-manager-68c8546c8b-bc9s5" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.455957 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpz79\" (UniqueName: \"kubernetes.io/projected/5d4c59de-cd84-49b2-b320-3217d5cc31f3-kube-api-access-bpz79\") pod \"watcher-operator-controller-manager-6cbc6dd547-rbwnk\" (UID: \"5d4c59de-cd84-49b2-b320-3217d5cc31f3\") " pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-rbwnk" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.467939 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a3c98f6-0e02-4493-b3d6-f030d73ca3ac-cert\") pod \"infra-operator-controller-manager-658588b8c9-7fsjz\" (UID: \"0a3c98f6-0e02-4493-b3d6-f030d73ca3ac\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-7fsjz" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.468215 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-dgn5z" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.492874 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-tb2fm" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.497664 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-rbwnk" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.521784 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4d780f6f-e916-4f61-922f-bbaeceb4db7c-cert\") pod \"openstack-operator-controller-manager-68c8546c8b-bc9s5\" (UID: \"4d780f6f-e916-4f61-922f-bbaeceb4db7c\") " pod="openstack-operators/openstack-operator-controller-manager-68c8546c8b-bc9s5" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.521857 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2vrf\" (UniqueName: \"kubernetes.io/projected/4d780f6f-e916-4f61-922f-bbaeceb4db7c-kube-api-access-l2vrf\") pod \"openstack-operator-controller-manager-68c8546c8b-bc9s5\" (UID: \"4d780f6f-e916-4f61-922f-bbaeceb4db7c\") " pod="openstack-operators/openstack-operator-controller-manager-68c8546c8b-bc9s5" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.521955 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58wsg\" (UniqueName: \"kubernetes.io/projected/01b8fb04-a40d-4e7b-be35-25f4450ec199-kube-api-access-58wsg\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-sfklm\" (UID: \"01b8fb04-a40d-4e7b-be35-25f4450ec199\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-sfklm" Oct 07 12:43:18 crc kubenswrapper[5024]: E1007 12:43:18.522923 5024 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Oct 07 12:43:18 crc kubenswrapper[5024]: E1007 12:43:18.522990 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d780f6f-e916-4f61-922f-bbaeceb4db7c-cert podName:4d780f6f-e916-4f61-922f-bbaeceb4db7c nodeName:}" failed. 
No retries permitted until 2025-10-07 12:43:19.022974892 +0000 UTC m=+937.098761730 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4d780f6f-e916-4f61-922f-bbaeceb4db7c-cert") pod "openstack-operator-controller-manager-68c8546c8b-bc9s5" (UID: "4d780f6f-e916-4f61-922f-bbaeceb4db7c") : secret "webhook-server-cert" not found Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.546200 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58wsg\" (UniqueName: \"kubernetes.io/projected/01b8fb04-a40d-4e7b-be35-25f4450ec199-kube-api-access-58wsg\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-sfklm\" (UID: \"01b8fb04-a40d-4e7b-be35-25f4450ec199\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-sfklm" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.546623 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2vrf\" (UniqueName: \"kubernetes.io/projected/4d780f6f-e916-4f61-922f-bbaeceb4db7c-kube-api-access-l2vrf\") pod \"openstack-operator-controller-manager-68c8546c8b-bc9s5\" (UID: \"4d780f6f-e916-4f61-922f-bbaeceb4db7c\") " pod="openstack-operators/openstack-operator-controller-manager-68c8546c8b-bc9s5" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.601008 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-sfklm" Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.726388 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c310d938-f1f0-4f85-90c3-f0625fc41848-cert\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb\" (UID: \"c310d938-f1f0-4f85-90c3-f0625fc41848\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb" Oct 07 12:43:18 crc kubenswrapper[5024]: E1007 12:43:18.726622 5024 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 07 12:43:18 crc kubenswrapper[5024]: E1007 12:43:18.726700 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c310d938-f1f0-4f85-90c3-f0625fc41848-cert podName:c310d938-f1f0-4f85-90c3-f0625fc41848 nodeName:}" failed. No retries permitted until 2025-10-07 12:43:19.726679994 +0000 UTC m=+937.802466872 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c310d938-f1f0-4f85-90c3-f0625fc41848-cert") pod "openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb" (UID: "c310d938-f1f0-4f85-90c3-f0625fc41848") : secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 07 12:43:18 crc kubenswrapper[5024]: I1007 12:43:18.746650 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-7fsjz" Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.036938 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-58c4cd55f4-6tzwp"] Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.039791 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4d780f6f-e916-4f61-922f-bbaeceb4db7c-cert\") pod \"openstack-operator-controller-manager-68c8546c8b-bc9s5\" (UID: \"4d780f6f-e916-4f61-922f-bbaeceb4db7c\") " pod="openstack-operators/openstack-operator-controller-manager-68c8546c8b-bc9s5" Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.047589 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4d780f6f-e916-4f61-922f-bbaeceb4db7c-cert\") pod \"openstack-operator-controller-manager-68c8546c8b-bc9s5\" (UID: \"4d780f6f-e916-4f61-922f-bbaeceb4db7c\") " pod="openstack-operators/openstack-operator-controller-manager-68c8546c8b-bc9s5" Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.090839 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-646554d9b9-dzx9r"] Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.199251 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-68c8546c8b-bc9s5" Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.362511 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-75dfd9b554-8r8zb"] Oct 07 12:43:19 crc kubenswrapper[5024]: W1007 12:43:19.372400 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75de1768_d533_460a_8397_012ef25ade39.slice/crio-0632e5e3a174dd414e6fe5e795e84cdf12d16a6db24921474e0347582d19b091 WatchSource:0}: Error finding container 0632e5e3a174dd414e6fe5e795e84cdf12d16a6db24921474e0347582d19b091: Status 404 returned error can't find the container with id 0632e5e3a174dd414e6fe5e795e84cdf12d16a6db24921474e0347582d19b091 Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.392607 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-54b4974c45-zbddw"] Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.404022 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5dc44df7d5-dszfh"] Oct 07 12:43:19 crc kubenswrapper[5024]: W1007 12:43:19.416309 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7df3da1a_3dc0_400e_a3a6_4878652ecfdc.slice/crio-012da6a7a0a7b8e6e6503725929c76cb818743b2fb955cf984c3498c65e5fe12 WatchSource:0}: Error finding container 012da6a7a0a7b8e6e6503725929c76cb818743b2fb955cf984c3498c65e5fe12: Status 404 returned error can't find the container with id 012da6a7a0a7b8e6e6503725929c76cb818743b2fb955cf984c3498c65e5fe12 Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.560894 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-649675d675-xscld"] Oct 07 12:43:19 crc 
kubenswrapper[5024]: I1007 12:43:19.583818 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-76d5b87f47-qnjtl"] Oct 07 12:43:19 crc kubenswrapper[5024]: W1007 12:43:19.590910 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c026852_45e6_4a05_bd27_3af46438df69.slice/crio-e668d88293153c5c353cff8b2f2238946d380915ea8ee8449cdc11bb1631e510 WatchSource:0}: Error finding container e668d88293153c5c353cff8b2f2238946d380915ea8ee8449cdc11bb1631e510: Status 404 returned error can't find the container with id e668d88293153c5c353cff8b2f2238946d380915ea8ee8449cdc11bb1631e510 Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.597284 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7468f855d8-lwz42"] Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.602052 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-qc4t4"] Oct 07 12:43:19 crc kubenswrapper[5024]: W1007 12:43:19.616729 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2285083_77e3_448b_b4f0_27adfb683e17.slice/crio-dead24a487380ec1960396ea684205a946cf518753f4774ba36e279971178434 WatchSource:0}: Error finding container dead24a487380ec1960396ea684205a946cf518753f4774ba36e279971178434: Status 404 returned error can't find the container with id dead24a487380ec1960396ea684205a946cf518753f4774ba36e279971178434 Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.625725 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-rwftw"] Oct 07 12:43:19 crc kubenswrapper[5024]: W1007 12:43:19.627711 5024 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62fe077d_cf16_4c42_95c3_39435d2c9042.slice/crio-56d11a41d5caf6ef58b6ec7ac1c55e2526a31ebb9cae76df0dde83e3f793c347 WatchSource:0}: Error finding container 56d11a41d5caf6ef58b6ec7ac1c55e2526a31ebb9cae76df0dde83e3f793c347: Status 404 returned error can't find the container with id 56d11a41d5caf6ef58b6ec7ac1c55e2526a31ebb9cae76df0dde83e3f793c347 Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.633637 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-8d984cc4d-gshfh"] Oct 07 12:43:19 crc kubenswrapper[5024]: W1007 12:43:19.634698 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeeac9611_70f4_4fc6_a161_01420d358164.slice/crio-5fad89a5c655df09a15af72b68aa84446cf7f83881c4e461837c992b399632f0 WatchSource:0}: Error finding container 5fad89a5c655df09a15af72b68aa84446cf7f83881c4e461837c992b399632f0: Status 404 returned error can't find the container with id 5fad89a5c655df09a15af72b68aa84446cf7f83881c4e461837c992b399632f0 Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.753289 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c310d938-f1f0-4f85-90c3-f0625fc41848-cert\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb\" (UID: \"c310d938-f1f0-4f85-90c3-f0625fc41848\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb" Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.760000 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c310d938-f1f0-4f85-90c3-f0625fc41848-cert\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb\" (UID: \"c310d938-f1f0-4f85-90c3-f0625fc41848\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb" Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.782978 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-649675d675-xscld" event={"ID":"8c026852-45e6-4a05-bd27-3af46438df69","Type":"ContainerStarted","Data":"e668d88293153c5c353cff8b2f2238946d380915ea8ee8449cdc11bb1631e510"} Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.784608 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-646554d9b9-dzx9r" event={"ID":"52744582-1aca-4f75-8dc3-337a19ab3fba","Type":"ContainerStarted","Data":"ba9a48e5ac4bbcf03ee098347b7caa7a15cbde7fc594b2011c4a1ba7c632090b"} Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.786067 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-gshfh" event={"ID":"eeac9611-70f4-4fc6-a161-01420d358164","Type":"ContainerStarted","Data":"5fad89a5c655df09a15af72b68aa84446cf7f83881c4e461837c992b399632f0"} Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.787344 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-rwftw" event={"ID":"be793dd5-2676-4289-961a-9e6c0731b13a","Type":"ContainerStarted","Data":"b58491b7024613caf377a0f0e6099d1dd5ce3055dd4158ae206d8a51757f1944"} Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.788502 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-qc4t4" event={"ID":"4a2174c3-a953-4535-9b70-5414c07633c0","Type":"ContainerStarted","Data":"2667af187d1459a790f794d0549cd867ed076924515aaabd1dd013841d1370c5"} Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.790008 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-qnjtl" event={"ID":"f2285083-77e3-448b-b4f0-27adfb683e17","Type":"ContainerStarted","Data":"dead24a487380ec1960396ea684205a946cf518753f4774ba36e279971178434"} Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.791067 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-dszfh" event={"ID":"7df3da1a-3dc0-400e-a3a6-4878652ecfdc","Type":"ContainerStarted","Data":"012da6a7a0a7b8e6e6503725929c76cb818743b2fb955cf984c3498c65e5fe12"} Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.791960 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-6tzwp" event={"ID":"39116473-582b-4f61-b3c8-44ab955c277b","Type":"ContainerStarted","Data":"100189fdeb500ab63301ace601aeb420062c9eca40c5f1da8df989b6d5e96e6f"} Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.793223 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-lwz42" event={"ID":"62fe077d-cf16-4c42-95c3-39435d2c9042","Type":"ContainerStarted","Data":"56d11a41d5caf6ef58b6ec7ac1c55e2526a31ebb9cae76df0dde83e3f793c347"} Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.795232 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-zbddw" event={"ID":"a1d2d630-766f-4486-b909-6f622bdc9748","Type":"ContainerStarted","Data":"0b0a2b3608631551f661d39e5a93f68a970629d5cf1e935a1fe8ba5984cb94cb"} Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.797197 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-8r8zb" event={"ID":"75de1768-d533-460a-8397-012ef25ade39","Type":"ContainerStarted","Data":"0632e5e3a174dd414e6fe5e795e84cdf12d16a6db24921474e0347582d19b091"} Oct 07 12:43:19 crc 
kubenswrapper[5024]: I1007 12:43:19.838328 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-dgn5z"] Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.838941 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb" Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.848301 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-65d89cfd9f-l6h5v"] Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.862171 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-54689d9f88-26sbg"] Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.881888 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-658588b8c9-7fsjz"] Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.886850 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7c7fc454ff-rnxmp"] Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.891556 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6859f9b676-mgxlm"] Oct 07 12:43:19 crc kubenswrapper[5024]: E1007 12:43:19.893634 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:f37e29d1f621c23c0d77b09076006d1e8002a77c2ff3d9b8921f893221cb1d09,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lnwsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovn-operator-controller-manager-6d8b6f9b9-dgn5z_openstack-operators(3fa4af87-32c4-423c-98c1-9cb8b7db5da2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.900972 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6cbc6dd547-rbwnk"] Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.904852 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-rcqrk"] Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.909492 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd5cb47d7-tb2fm"] Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.914717 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-sfklm"] Oct 07 12:43:19 crc kubenswrapper[5024]: W1007 12:43:19.917818 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd39b8507_a457_4bdb_95ce_e20abf48c406.slice/crio-fcd9c7ec0905f6cea26d939c354c0f863e1f28473e68eaf4b82e7bad4891d0b0 WatchSource:0}: Error finding container fcd9c7ec0905f6cea26d939c354c0f863e1f28473e68eaf4b82e7bad4891d0b0: Status 404 returned error can't find the container with id fcd9c7ec0905f6cea26d939c354c0f863e1f28473e68eaf4b82e7bad4891d0b0 Oct 07 12:43:19 crc kubenswrapper[5024]: E1007 12:43:19.920894 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:bf55026ba10b80e1e24733078bd204cef8766d21a305fd000707a1e3b30ff52e,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ssvxp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
telemetry-operator-controller-manager-5d4d74dd89-rcqrk_openstack-operators(d39b8507-a457-4bdb-95ce-e20abf48c406): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 07 12:43:19 crc kubenswrapper[5024]: I1007 12:43:19.923466 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-68c8546c8b-bc9s5"] Oct 07 12:43:19 crc kubenswrapper[5024]: W1007 12:43:19.924687 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d780f6f_e916_4f61_922f_bbaeceb4db7c.slice/crio-16390b2d1862a02599313112b0608297c9c517254bb9280429e83a12463ca0f8 WatchSource:0}: Error finding container 16390b2d1862a02599313112b0608297c9c517254bb9280429e83a12463ca0f8: Status 404 returned error can't find the container with id 16390b2d1862a02599313112b0608297c9c517254bb9280429e83a12463ca0f8 Oct 07 12:43:19 crc kubenswrapper[5024]: W1007 12:43:19.934221 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01b8fb04_a40d_4e7b_be35_25f4450ec199.slice/crio-b84a5b4c74a89dee7909c83dbdd14cc6aa39310e4e3049ebb2ab30b1d4257011 WatchSource:0}: Error finding container b84a5b4c74a89dee7909c83dbdd14cc6aa39310e4e3049ebb2ab30b1d4257011: Status 404 returned error can't find the container with id b84a5b4c74a89dee7909c83dbdd14cc6aa39310e4e3049ebb2ab30b1d4257011 Oct 07 12:43:19 crc kubenswrapper[5024]: W1007 12:43:19.935870 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeef43156_170e_4dd4_abf1_77fa4763c4b8.slice/crio-d65cef8f38f81645c571ca72df25abcd25a39477230f9efc0b69ff3f654f21a7 WatchSource:0}: Error finding container d65cef8f38f81645c571ca72df25abcd25a39477230f9efc0b69ff3f654f21a7: Status 404 returned error can't find the container with id d65cef8f38f81645c571ca72df25abcd25a39477230f9efc0b69ff3f654f21a7 Oct 07 
12:43:19 crc kubenswrapper[5024]: E1007 12:43:19.939785 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:637bb7b9ac308bc1e323391a3593b824f688090a856c83385814c17a571b1eed,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6qk6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6859f9b676-mgxlm_openstack-operators(eef43156-170e-4dd4-abf1-77fa4763c4b8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 07 12:43:19 crc kubenswrapper[5024]: E1007 12:43:19.939908 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 
-3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-58wsg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-sfklm_openstack-operators(01b8fb04-a40d-4e7b-be35-25f4450ec199): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 07 12:43:19 crc kubenswrapper[5024]: E1007 12:43:19.943090 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-sfklm" podUID="01b8fb04-a40d-4e7b-be35-25f4450ec199" Oct 07 12:43:19 crc kubenswrapper[5024]: E1007 12:43:19.961585 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:0daf76cc40ab619ae266b11defcc1b65beb22d859369e7b1b04de9169089a4cb,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zp7l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
test-operator-controller-manager-5cd5cb47d7-tb2fm_openstack-operators(29940c9b-e33d-432a-86e1-e552ce1cefdd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 07 12:43:19 crc kubenswrapper[5024]: E1007 12:43:19.961589 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:e4c4ff39c54c0af231fb781759ab50ed86285c74d38bdea43fa75646b762d842,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n6rh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7c7fc454ff-rnxmp_openstack-operators(f15a5a3b-18f5-4fcf-8ecb-b9e31b144f6a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 07 12:43:19 crc kubenswrapper[5024]: E1007 12:43:19.962252 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:64f57b2b59dea2bd9fae91490c5bec2687131884a049e6579819d9f951b877c6,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bpz79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6cbc6dd547-rbwnk_openstack-operators(5d4c59de-cd84-49b2-b320-3217d5cc31f3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 07 12:43:20 crc kubenswrapper[5024]: I1007 12:43:20.135295 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb"] Oct 07 12:43:20 crc kubenswrapper[5024]: W1007 12:43:20.151571 5024 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc310d938_f1f0_4f85_90c3_f0625fc41848.slice/crio-a29226c34d79e4976bcaa112d4e81554535bcb816d9d76238a50184511d086bd WatchSource:0}: Error finding container a29226c34d79e4976bcaa112d4e81554535bcb816d9d76238a50184511d086bd: Status 404 returned error can't find the container with id a29226c34d79e4976bcaa112d4e81554535bcb816d9d76238a50184511d086bd Oct 07 12:43:20 crc kubenswrapper[5024]: E1007 12:43:20.212503 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-dgn5z" podUID="3fa4af87-32c4-423c-98c1-9cb8b7db5da2" Oct 07 12:43:20 crc kubenswrapper[5024]: E1007 12:43:20.217869 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-mgxlm" podUID="eef43156-170e-4dd4-abf1-77fa4763c4b8" Oct 07 12:43:20 crc kubenswrapper[5024]: E1007 12:43:20.231736 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-rcqrk" podUID="d39b8507-a457-4bdb-95ce-e20abf48c406" Oct 07 12:43:20 crc kubenswrapper[5024]: E1007 12:43:20.243634 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-rnxmp" podUID="f15a5a3b-18f5-4fcf-8ecb-b9e31b144f6a" Oct 07 12:43:20 crc kubenswrapper[5024]: E1007 12:43:20.305078 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" 
pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-tb2fm" podUID="29940c9b-e33d-432a-86e1-e552ce1cefdd" Oct 07 12:43:20 crc kubenswrapper[5024]: E1007 12:43:20.314185 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-rbwnk" podUID="5d4c59de-cd84-49b2-b320-3217d5cc31f3" Oct 07 12:43:20 crc kubenswrapper[5024]: I1007 12:43:20.873690 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-dgn5z" event={"ID":"3fa4af87-32c4-423c-98c1-9cb8b7db5da2","Type":"ContainerStarted","Data":"0de0d5b8e46ee9ba65607ce96d3afe236c2a1ee592da05d3e7bc14fc85232509"} Oct 07 12:43:20 crc kubenswrapper[5024]: I1007 12:43:20.874126 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-dgn5z" event={"ID":"3fa4af87-32c4-423c-98c1-9cb8b7db5da2","Type":"ContainerStarted","Data":"efefc52a4b06862b774cd81c598ef50b63a63b18b71e6d5a039395313ac85cb6"} Oct 07 12:43:20 crc kubenswrapper[5024]: E1007 12:43:20.875672 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:f37e29d1f621c23c0d77b09076006d1e8002a77c2ff3d9b8921f893221cb1d09\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-dgn5z" podUID="3fa4af87-32c4-423c-98c1-9cb8b7db5da2" Oct 07 12:43:20 crc kubenswrapper[5024]: I1007 12:43:20.880863 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-7fsjz" event={"ID":"0a3c98f6-0e02-4493-b3d6-f030d73ca3ac","Type":"ContainerStarted","Data":"326b04f2a0c3bb788e9594bd0d82790001862bc2a6e90fcf69afc4dce5594060"} Oct 07 12:43:20 crc kubenswrapper[5024]: 
I1007 12:43:20.884489 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-rnxmp" event={"ID":"f15a5a3b-18f5-4fcf-8ecb-b9e31b144f6a","Type":"ContainerStarted","Data":"9e8d4695d30a2b238679a3e06f1e60c17f3873ea2a26386977f569c21fc030e4"} Oct 07 12:43:20 crc kubenswrapper[5024]: I1007 12:43:20.884541 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-rnxmp" event={"ID":"f15a5a3b-18f5-4fcf-8ecb-b9e31b144f6a","Type":"ContainerStarted","Data":"71d200c7ba10eba44cb5055ab94ccea04b1a01b89e3adadf0dbc8da57532f737"} Oct 07 12:43:20 crc kubenswrapper[5024]: E1007 12:43:20.886435 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:e4c4ff39c54c0af231fb781759ab50ed86285c74d38bdea43fa75646b762d842\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-rnxmp" podUID="f15a5a3b-18f5-4fcf-8ecb-b9e31b144f6a" Oct 07 12:43:20 crc kubenswrapper[5024]: I1007 12:43:20.903123 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-68c8546c8b-bc9s5" event={"ID":"4d780f6f-e916-4f61-922f-bbaeceb4db7c","Type":"ContainerStarted","Data":"dcefa9819e60d1cddb797d340f8aab404cf702106f563a2ad003781993b46b65"} Oct 07 12:43:20 crc kubenswrapper[5024]: I1007 12:43:20.903283 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-68c8546c8b-bc9s5" event={"ID":"4d780f6f-e916-4f61-922f-bbaeceb4db7c","Type":"ContainerStarted","Data":"416af4ab9ed6ba2a522923568d05ce26a9ff0176f583ae66260c8061d4a58494"} Oct 07 12:43:20 crc kubenswrapper[5024]: I1007 12:43:20.903321 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-controller-manager-68c8546c8b-bc9s5" event={"ID":"4d780f6f-e916-4f61-922f-bbaeceb4db7c","Type":"ContainerStarted","Data":"16390b2d1862a02599313112b0608297c9c517254bb9280429e83a12463ca0f8"} Oct 07 12:43:20 crc kubenswrapper[5024]: I1007 12:43:20.904027 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-68c8546c8b-bc9s5" Oct 07 12:43:20 crc kubenswrapper[5024]: I1007 12:43:20.923863 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-rcqrk" event={"ID":"d39b8507-a457-4bdb-95ce-e20abf48c406","Type":"ContainerStarted","Data":"81400a85bbe9e386f8449e3d6db4fcffd3d124f23b488a9c83e6b498bbd4134f"} Oct 07 12:43:20 crc kubenswrapper[5024]: I1007 12:43:20.923946 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-rcqrk" event={"ID":"d39b8507-a457-4bdb-95ce-e20abf48c406","Type":"ContainerStarted","Data":"fcd9c7ec0905f6cea26d939c354c0f863e1f28473e68eaf4b82e7bad4891d0b0"} Oct 07 12:43:20 crc kubenswrapper[5024]: E1007 12:43:20.931699 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:bf55026ba10b80e1e24733078bd204cef8766d21a305fd000707a1e3b30ff52e\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-rcqrk" podUID="d39b8507-a457-4bdb-95ce-e20abf48c406" Oct 07 12:43:20 crc kubenswrapper[5024]: I1007 12:43:20.940706 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-l6h5v" event={"ID":"55b45b25-e171-4e43-8da0-b18c06e7515b","Type":"ContainerStarted","Data":"2f4a0056ee86ea01abbfce6be233fe1bd5e8ad34d9035672295235f34f97c1e9"} Oct 07 12:43:20 crc 
kubenswrapper[5024]: I1007 12:43:20.953751 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-rbwnk" event={"ID":"5d4c59de-cd84-49b2-b320-3217d5cc31f3","Type":"ContainerStarted","Data":"b7fbe50c5bc47b9ab14cc694f1e527bf4e51da5f069eb79ce28e0d8d243e867c"} Oct 07 12:43:20 crc kubenswrapper[5024]: I1007 12:43:20.953809 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-rbwnk" event={"ID":"5d4c59de-cd84-49b2-b320-3217d5cc31f3","Type":"ContainerStarted","Data":"f5897ee06007a418a753d87155a1909de4049857699fba1e6101f1f64ac363e4"} Oct 07 12:43:20 crc kubenswrapper[5024]: E1007 12:43:20.961003 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:64f57b2b59dea2bd9fae91490c5bec2687131884a049e6579819d9f951b877c6\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-rbwnk" podUID="5d4c59de-cd84-49b2-b320-3217d5cc31f3" Oct 07 12:43:20 crc kubenswrapper[5024]: I1007 12:43:20.961363 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-sfklm" event={"ID":"01b8fb04-a40d-4e7b-be35-25f4450ec199","Type":"ContainerStarted","Data":"b84a5b4c74a89dee7909c83dbdd14cc6aa39310e4e3049ebb2ab30b1d4257011"} Oct 07 12:43:20 crc kubenswrapper[5024]: I1007 12:43:20.962841 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb" event={"ID":"c310d938-f1f0-4f85-90c3-f0625fc41848","Type":"ContainerStarted","Data":"a29226c34d79e4976bcaa112d4e81554535bcb816d9d76238a50184511d086bd"} Oct 07 12:43:20 crc kubenswrapper[5024]: I1007 12:43:20.967021 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-operator-controller-manager-68c8546c8b-bc9s5" podStartSLOduration=2.967001919 podStartE2EDuration="2.967001919s" podCreationTimestamp="2025-10-07 12:43:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:43:20.963410434 +0000 UTC m=+939.039197272" watchObservedRunningTime="2025-10-07 12:43:20.967001919 +0000 UTC m=+939.042788757" Oct 07 12:43:20 crc kubenswrapper[5024]: E1007 12:43:20.970066 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-sfklm" podUID="01b8fb04-a40d-4e7b-be35-25f4450ec199" Oct 07 12:43:20 crc kubenswrapper[5024]: I1007 12:43:20.973578 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-mgxlm" event={"ID":"eef43156-170e-4dd4-abf1-77fa4763c4b8","Type":"ContainerStarted","Data":"6821727637b295891a57ef92021b4a0c492607a9ad9a884fc2e943fa6d164043"} Oct 07 12:43:20 crc kubenswrapper[5024]: I1007 12:43:20.973642 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-mgxlm" event={"ID":"eef43156-170e-4dd4-abf1-77fa4763c4b8","Type":"ContainerStarted","Data":"d65cef8f38f81645c571ca72df25abcd25a39477230f9efc0b69ff3f654f21a7"} Oct 07 12:43:20 crc kubenswrapper[5024]: E1007 12:43:20.985469 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:637bb7b9ac308bc1e323391a3593b824f688090a856c83385814c17a571b1eed\\\"\"" 
pod="openstack-operators/swift-operator-controller-manager-6859f9b676-mgxlm" podUID="eef43156-170e-4dd4-abf1-77fa4763c4b8" Oct 07 12:43:21 crc kubenswrapper[5024]: I1007 12:43:21.017402 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-26sbg" event={"ID":"b1d7d818-5c22-4f23-9d2e-0459f36de335","Type":"ContainerStarted","Data":"64061a5567067fcb4b21c5e1ea9dd832891d34be5895029163d8ace02be13973"} Oct 07 12:43:21 crc kubenswrapper[5024]: I1007 12:43:21.021263 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-tb2fm" event={"ID":"29940c9b-e33d-432a-86e1-e552ce1cefdd","Type":"ContainerStarted","Data":"44398b479de6796d1482afacd139e9ee976cdd9f18ff27b43026a65cc1f33fbd"} Oct 07 12:43:21 crc kubenswrapper[5024]: I1007 12:43:21.021308 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-tb2fm" event={"ID":"29940c9b-e33d-432a-86e1-e552ce1cefdd","Type":"ContainerStarted","Data":"69fe5ac43a664cd27a2448b73d17c8a1615a617517001ab3cee66fa4049a3c0a"} Oct 07 12:43:21 crc kubenswrapper[5024]: E1007 12:43:21.022911 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:0daf76cc40ab619ae266b11defcc1b65beb22d859369e7b1b04de9169089a4cb\\\"\"" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-tb2fm" podUID="29940c9b-e33d-432a-86e1-e552ce1cefdd" Oct 07 12:43:22 crc kubenswrapper[5024]: E1007 12:43:22.035023 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:637bb7b9ac308bc1e323391a3593b824f688090a856c83385814c17a571b1eed\\\"\"" 
pod="openstack-operators/swift-operator-controller-manager-6859f9b676-mgxlm" podUID="eef43156-170e-4dd4-abf1-77fa4763c4b8" Oct 07 12:43:22 crc kubenswrapper[5024]: E1007 12:43:22.035083 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:bf55026ba10b80e1e24733078bd204cef8766d21a305fd000707a1e3b30ff52e\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-rcqrk" podUID="d39b8507-a457-4bdb-95ce-e20abf48c406" Oct 07 12:43:22 crc kubenswrapper[5024]: E1007 12:43:22.035093 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:f37e29d1f621c23c0d77b09076006d1e8002a77c2ff3d9b8921f893221cb1d09\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-dgn5z" podUID="3fa4af87-32c4-423c-98c1-9cb8b7db5da2" Oct 07 12:43:22 crc kubenswrapper[5024]: E1007 12:43:22.035152 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:64f57b2b59dea2bd9fae91490c5bec2687131884a049e6579819d9f951b877c6\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-rbwnk" podUID="5d4c59de-cd84-49b2-b320-3217d5cc31f3" Oct 07 12:43:22 crc kubenswrapper[5024]: E1007 12:43:22.035153 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-sfklm" 
podUID="01b8fb04-a40d-4e7b-be35-25f4450ec199" Oct 07 12:43:22 crc kubenswrapper[5024]: E1007 12:43:22.035164 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:0daf76cc40ab619ae266b11defcc1b65beb22d859369e7b1b04de9169089a4cb\\\"\"" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-tb2fm" podUID="29940c9b-e33d-432a-86e1-e552ce1cefdd" Oct 07 12:43:22 crc kubenswrapper[5024]: E1007 12:43:22.038232 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:e4c4ff39c54c0af231fb781759ab50ed86285c74d38bdea43fa75646b762d842\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-rnxmp" podUID="f15a5a3b-18f5-4fcf-8ecb-b9e31b144f6a" Oct 07 12:43:29 crc kubenswrapper[5024]: I1007 12:43:29.209458 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-68c8546c8b-bc9s5" Oct 07 12:43:31 crc kubenswrapper[5024]: E1007 12:43:31.642833 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:b6ab8fc3ad425eca2e073fe9ba9d5b29d9ea4d9814de7bb799fa330209566cd4" Oct 07 12:43:31 crc kubenswrapper[5024]: E1007 12:43:31.643400 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:b6ab8fc3ad425eca2e073fe9ba9d5b29d9ea4d9814de7bb799fa330209566cd4,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6ghg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
horizon-operator-controller-manager-76d5b87f47-qnjtl_openstack-operators(f2285083-77e3-448b-b4f0-27adfb683e17): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 07 12:43:32 crc kubenswrapper[5024]: E1007 12:43:32.521068 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:a6f1dcab931fd4b818010607ede65150742563b3c81a3ad3d739ef7953cace0b" Oct 07 12:43:32 crc kubenswrapper[5024]: E1007 12:43:32.521285 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:a6f1dcab931fd4b818010607ede65150742563b3c81a3ad3d739ef7953cace0b,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h9vhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7b5ccf6d9c-rwftw_openstack-operators(be793dd5-2676-4289-961a-9e6c0731b13a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 07 12:43:32 crc kubenswrapper[5024]: E1007 12:43:32.921073 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:84644287affe77dedadcb0064fbe24fa882436e4656529111fcd3ce5ea882d5e" Oct 07 12:43:32 crc kubenswrapper[5024]: E1007 12:43:32.921545 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:84644287affe77dedadcb0064fbe24fa882436e4656529111fcd3ce5ea882d5e,Command:[/manager],Args:[--health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9r2lr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
glance-operator-controller-manager-5dc44df7d5-dszfh_openstack-operators(7df3da1a-3dc0-400e-a3a6-4878652ecfdc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 07 12:43:33 crc kubenswrapper[5024]: E1007 12:43:33.227549 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bcd1acac74e68eea5a9c3b7ba1bcb29d3a5b43423fc23c19ad4715bdac41f799" Oct 07 12:43:33 crc kubenswrapper[5024]: E1007 12:43:33.228118 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bcd1acac74e68eea5a9c3b7ba1bcb29d3a5b43423fc23c19ad4715bdac41f799,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAG
E_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE
_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_M
ULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMA
GE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_I
MAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_
OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:
RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ctp4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb_openstack-operators(c310d938-f1f0-4f85-90c3-f0625fc41848): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 07 12:43:33 crc kubenswrapper[5024]: E1007 12:43:33.621198 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:b6cef68bfaacdf992a9fa1a6b03a848a48c18cbb6ed12d95561b4b37d858b99f" Oct 07 12:43:33 crc kubenswrapper[5024]: E1007 12:43:33.621429 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:b6cef68bfaacdf992a9fa1a6b03a848a48c18cbb6ed12d95561b4b37d858b99f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g9zg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-658588b8c9-7fsjz_openstack-operators(0a3c98f6-0e02-4493-b3d6-f030d73ca3ac): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 07 12:43:34 crc kubenswrapper[5024]: E1007 12:43:34.046776 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb" podUID="c310d938-f1f0-4f85-90c3-f0625fc41848" Oct 07 12:43:34 crc kubenswrapper[5024]: E1007 12:43:34.046863 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-7fsjz" podUID="0a3c98f6-0e02-4493-b3d6-f030d73ca3ac" Oct 07 12:43:34 crc kubenswrapper[5024]: E1007 12:43:34.071944 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc 
error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-dszfh" podUID="7df3da1a-3dc0-400e-a3a6-4878652ecfdc" Oct 07 12:43:34 crc kubenswrapper[5024]: E1007 12:43:34.074722 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-qnjtl" podUID="f2285083-77e3-448b-b4f0-27adfb683e17" Oct 07 12:43:34 crc kubenswrapper[5024]: E1007 12:43:34.077099 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-rwftw" podUID="be793dd5-2676-4289-961a-9e6c0731b13a" Oct 07 12:43:34 crc kubenswrapper[5024]: I1007 12:43:34.125325 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-rwftw" event={"ID":"be793dd5-2676-4289-961a-9e6c0731b13a","Type":"ContainerStarted","Data":"71c20b2c3c326ff7c89d4c79ad81dedcd9a2f25fe7f02ac5ff70598964481dfa"} Oct 07 12:43:34 crc kubenswrapper[5024]: I1007 12:43:34.136001 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-l6h5v" event={"ID":"55b45b25-e171-4e43-8da0-b18c06e7515b","Type":"ContainerStarted","Data":"f96cd17b92578a953afd10f02c40df89ffddab5399d66bdebfaa87d826e6b8d0"} Oct 07 12:43:34 crc kubenswrapper[5024]: E1007 12:43:34.137674 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:a6f1dcab931fd4b818010607ede65150742563b3c81a3ad3d739ef7953cace0b\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-rwftw" podUID="be793dd5-2676-4289-961a-9e6c0731b13a" Oct 07 12:43:34 crc kubenswrapper[5024]: I1007 12:43:34.145851 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-dszfh" event={"ID":"7df3da1a-3dc0-400e-a3a6-4878652ecfdc","Type":"ContainerStarted","Data":"3fde2f8d5df8e00d2b6009f04d74341640894d1ad9f8f62879b8570001cd627f"} Oct 07 12:43:34 crc kubenswrapper[5024]: E1007 12:43:34.148241 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:84644287affe77dedadcb0064fbe24fa882436e4656529111fcd3ce5ea882d5e\\\"\"" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-dszfh" podUID="7df3da1a-3dc0-400e-a3a6-4878652ecfdc" Oct 07 12:43:34 crc kubenswrapper[5024]: I1007 12:43:34.155682 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-qnjtl" event={"ID":"f2285083-77e3-448b-b4f0-27adfb683e17","Type":"ContainerStarted","Data":"8a3d97be7243202368cec6f0c1b982495082e969bbcf1cb20d7ed3a1f9813691"} Oct 07 12:43:34 crc kubenswrapper[5024]: E1007 12:43:34.163414 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:b6ab8fc3ad425eca2e073fe9ba9d5b29d9ea4d9814de7bb799fa330209566cd4\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-qnjtl" podUID="f2285083-77e3-448b-b4f0-27adfb683e17" Oct 07 12:43:34 crc kubenswrapper[5024]: I1007 12:43:34.165594 5024 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb" event={"ID":"c310d938-f1f0-4f85-90c3-f0625fc41848","Type":"ContainerStarted","Data":"05e4e0be775fe149d2cf45de4e28e4a4233d90734f858c789f915007556b1a03"} Oct 07 12:43:34 crc kubenswrapper[5024]: E1007 12:43:34.170887 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bcd1acac74e68eea5a9c3b7ba1bcb29d3a5b43423fc23c19ad4715bdac41f799\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb" podUID="c310d938-f1f0-4f85-90c3-f0625fc41848" Oct 07 12:43:34 crc kubenswrapper[5024]: I1007 12:43:34.178517 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-7fsjz" event={"ID":"0a3c98f6-0e02-4493-b3d6-f030d73ca3ac","Type":"ContainerStarted","Data":"0564a5e5b7052d3857fe40ab5f0fb65c9ef5ba4fc0a2f720241778358fd9bdbf"} Oct 07 12:43:34 crc kubenswrapper[5024]: E1007 12:43:34.181554 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:b6cef68bfaacdf992a9fa1a6b03a848a48c18cbb6ed12d95561b4b37d858b99f\\\"\"" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-7fsjz" podUID="0a3c98f6-0e02-4493-b3d6-f030d73ca3ac" Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.197289 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-gshfh" event={"ID":"eeac9611-70f4-4fc6-a161-01420d358164","Type":"ContainerStarted","Data":"891b31336c160a1edbb44c95f11edcaf0c498321c8883981089419727200397d"} Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.197559 5024 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-gshfh" event={"ID":"eeac9611-70f4-4fc6-a161-01420d358164","Type":"ContainerStarted","Data":"8e4d122e7297f3f0c6df81903c6f0955337f13832c41b445f96a9d3cdcd6eb61"} Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.197606 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-gshfh" Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.200000 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-6tzwp" event={"ID":"39116473-582b-4f61-b3c8-44ab955c277b","Type":"ContainerStarted","Data":"03147a5a8f3cb32b84f28a239b25128a89e2d0b6e9ef1ebe8bad58594c77a2dc"} Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.200026 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-6tzwp" event={"ID":"39116473-582b-4f61-b3c8-44ab955c277b","Type":"ContainerStarted","Data":"54cc84e960396b7d3520f4ac14a0264c96401be320707a1fb42ab5f65e6f14bb"} Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.200621 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-6tzwp" Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.231220 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-26sbg" event={"ID":"b1d7d818-5c22-4f23-9d2e-0459f36de335","Type":"ContainerStarted","Data":"83172a8f82993c5620398fb535c17ff42cf7ec4a29629664ab557518a54e8d6c"} Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.231260 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-26sbg" 
event={"ID":"b1d7d818-5c22-4f23-9d2e-0459f36de335","Type":"ContainerStarted","Data":"ec3519dc4ec05c3b8c312cf6ba01d345336f2c0b5ea8fb5bdc7fb9aa6d198fff"} Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.231282 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-26sbg" Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.254805 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-lwz42" event={"ID":"62fe077d-cf16-4c42-95c3-39435d2c9042","Type":"ContainerStarted","Data":"aa66d3e7580c2d676cfec478931a1d3fe9c6d1e8aeb11456875561a83184fb8e"} Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.254853 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-lwz42" event={"ID":"62fe077d-cf16-4c42-95c3-39435d2c9042","Type":"ContainerStarted","Data":"0467e86cc8b9e18c6e7c5e7cb07434502014147757c21c21cafddf8ba9dadd36"} Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.255482 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-lwz42" Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.260480 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-gshfh" podStartSLOduration=4.214369191 podStartE2EDuration="18.260462153s" podCreationTimestamp="2025-10-07 12:43:17 +0000 UTC" firstStartedPulling="2025-10-07 12:43:19.638469203 +0000 UTC m=+937.714256041" lastFinishedPulling="2025-10-07 12:43:33.684562165 +0000 UTC m=+951.760349003" observedRunningTime="2025-10-07 12:43:35.225499118 +0000 UTC m=+953.301285956" watchObservedRunningTime="2025-10-07 12:43:35.260462153 +0000 UTC m=+953.336248991" Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.262558 5024 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-6tzwp" podStartSLOduration=3.668642983 podStartE2EDuration="18.262551404s" podCreationTimestamp="2025-10-07 12:43:17 +0000 UTC" firstStartedPulling="2025-10-07 12:43:19.102196282 +0000 UTC m=+937.177983120" lastFinishedPulling="2025-10-07 12:43:33.696104703 +0000 UTC m=+951.771891541" observedRunningTime="2025-10-07 12:43:35.256913449 +0000 UTC m=+953.332700287" watchObservedRunningTime="2025-10-07 12:43:35.262551404 +0000 UTC m=+953.338338242" Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.271793 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-8r8zb" event={"ID":"75de1768-d533-460a-8397-012ef25ade39","Type":"ContainerStarted","Data":"ff7a3c3d19b1bf4e46a33d41f882546c6bd4c054be3ae54e0c77514b398f78a4"} Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.271863 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-8r8zb" Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.280924 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-l6h5v" event={"ID":"55b45b25-e171-4e43-8da0-b18c06e7515b","Type":"ContainerStarted","Data":"e22be40dc0f8927112c572a139df55b01ddbf269ada9db733723329e74a96350"} Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.281537 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-l6h5v" Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.285010 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-lwz42" podStartSLOduration=4.222141219 podStartE2EDuration="18.285000552s" 
podCreationTimestamp="2025-10-07 12:43:17 +0000 UTC" firstStartedPulling="2025-10-07 12:43:19.630241972 +0000 UTC m=+937.706028810" lastFinishedPulling="2025-10-07 12:43:33.693101305 +0000 UTC m=+951.768888143" observedRunningTime="2025-10-07 12:43:35.282285823 +0000 UTC m=+953.358072661" watchObservedRunningTime="2025-10-07 12:43:35.285000552 +0000 UTC m=+953.360787390" Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.286285 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-qc4t4" event={"ID":"4a2174c3-a953-4535-9b70-5414c07633c0","Type":"ContainerStarted","Data":"798bd1c281d6ad6729c013e9d5db4be98a236cb107851a1f1364544b4476217e"} Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.298390 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-649675d675-xscld" event={"ID":"8c026852-45e6-4a05-bd27-3af46438df69","Type":"ContainerStarted","Data":"81011a36fc79205263517181879a3057373c2ce1256b4ba2f2c247c866a99da5"} Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.298446 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-649675d675-xscld" event={"ID":"8c026852-45e6-4a05-bd27-3af46438df69","Type":"ContainerStarted","Data":"c0c1f8c8f1ef4d07f168e6610c7aeb88ef13330407d04d0937f3a50054d083f3"} Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.299176 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-649675d675-xscld" Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.303562 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-646554d9b9-dzx9r" event={"ID":"52744582-1aca-4f75-8dc3-337a19ab3fba","Type":"ContainerStarted","Data":"e2826bda28eb78312aa7991ceac999989a5ae521b115ce843b1a057761ba1cca"} Oct 07 12:43:35 crc 
kubenswrapper[5024]: I1007 12:43:35.303926 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-646554d9b9-dzx9r" Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.310338 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-26sbg" podStartSLOduration=4.480622987 podStartE2EDuration="18.310316975s" podCreationTimestamp="2025-10-07 12:43:17 +0000 UTC" firstStartedPulling="2025-10-07 12:43:19.866366594 +0000 UTC m=+937.942153432" lastFinishedPulling="2025-10-07 12:43:33.696060582 +0000 UTC m=+951.771847420" observedRunningTime="2025-10-07 12:43:35.308868772 +0000 UTC m=+953.384655610" watchObservedRunningTime="2025-10-07 12:43:35.310316975 +0000 UTC m=+953.386103813" Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.323577 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-zbddw" event={"ID":"a1d2d630-766f-4486-b909-6f622bdc9748","Type":"ContainerStarted","Data":"130967bcb616a8de8b78c05ae827816f3936fac290878afb20b7c1d1eb9c8241"} Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.323613 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-zbddw" Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.323625 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-zbddw" event={"ID":"a1d2d630-766f-4486-b909-6f622bdc9748","Type":"ContainerStarted","Data":"ce0fbb22313aab538936eaa2002f6a1e6803d22e25923892cf7d670865448b67"} Oct 07 12:43:35 crc kubenswrapper[5024]: E1007 12:43:35.329300 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/infra-operator@sha256:b6cef68bfaacdf992a9fa1a6b03a848a48c18cbb6ed12d95561b4b37d858b99f\\\"\"" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-7fsjz" podUID="0a3c98f6-0e02-4493-b3d6-f030d73ca3ac" Oct 07 12:43:35 crc kubenswrapper[5024]: E1007 12:43:35.329548 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bcd1acac74e68eea5a9c3b7ba1bcb29d3a5b43423fc23c19ad4715bdac41f799\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb" podUID="c310d938-f1f0-4f85-90c3-f0625fc41848" Oct 07 12:43:35 crc kubenswrapper[5024]: E1007 12:43:35.329600 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:84644287affe77dedadcb0064fbe24fa882436e4656529111fcd3ce5ea882d5e\\\"\"" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-dszfh" podUID="7df3da1a-3dc0-400e-a3a6-4878652ecfdc" Oct 07 12:43:35 crc kubenswrapper[5024]: E1007 12:43:35.329637 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:a6f1dcab931fd4b818010607ede65150742563b3c81a3ad3d739ef7953cace0b\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-rwftw" podUID="be793dd5-2676-4289-961a-9e6c0731b13a" Oct 07 12:43:35 crc kubenswrapper[5024]: E1007 12:43:35.329678 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:b6ab8fc3ad425eca2e073fe9ba9d5b29d9ea4d9814de7bb799fa330209566cd4\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-qnjtl" podUID="f2285083-77e3-448b-b4f0-27adfb683e17" Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.347443 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-8r8zb" podStartSLOduration=4.031085727 podStartE2EDuration="18.347428782s" podCreationTimestamp="2025-10-07 12:43:17 +0000 UTC" firstStartedPulling="2025-10-07 12:43:19.376804452 +0000 UTC m=+937.452591290" lastFinishedPulling="2025-10-07 12:43:33.693147497 +0000 UTC m=+951.768934345" observedRunningTime="2025-10-07 12:43:35.34599231 +0000 UTC m=+953.421779148" watchObservedRunningTime="2025-10-07 12:43:35.347428782 +0000 UTC m=+953.423215610" Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.371860 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-646554d9b9-dzx9r" podStartSLOduration=3.82034741 podStartE2EDuration="18.371844898s" podCreationTimestamp="2025-10-07 12:43:17 +0000 UTC" firstStartedPulling="2025-10-07 12:43:19.141673629 +0000 UTC m=+937.217460467" lastFinishedPulling="2025-10-07 12:43:33.693171117 +0000 UTC m=+951.768957955" observedRunningTime="2025-10-07 12:43:35.37021698 +0000 UTC m=+953.446003838" watchObservedRunningTime="2025-10-07 12:43:35.371844898 +0000 UTC m=+953.447631736" Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.426616 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-649675d675-xscld" podStartSLOduration=4.356386314 podStartE2EDuration="18.426596443s" podCreationTimestamp="2025-10-07 12:43:17 +0000 UTC" firstStartedPulling="2025-10-07 12:43:19.603637782 +0000 UTC m=+937.679424620" 
lastFinishedPulling="2025-10-07 12:43:33.673847911 +0000 UTC m=+951.749634749" observedRunningTime="2025-10-07 12:43:35.402243769 +0000 UTC m=+953.478030597" watchObservedRunningTime="2025-10-07 12:43:35.426596443 +0000 UTC m=+953.502383281" Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.428492 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-l6h5v" podStartSLOduration=4.606666171 podStartE2EDuration="18.428485579s" podCreationTimestamp="2025-10-07 12:43:17 +0000 UTC" firstStartedPulling="2025-10-07 12:43:19.874267705 +0000 UTC m=+937.950054543" lastFinishedPulling="2025-10-07 12:43:33.696087113 +0000 UTC m=+951.771873951" observedRunningTime="2025-10-07 12:43:35.422576565 +0000 UTC m=+953.498363403" watchObservedRunningTime="2025-10-07 12:43:35.428485579 +0000 UTC m=+953.504272417" Oct 07 12:43:35 crc kubenswrapper[5024]: I1007 12:43:35.449998 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-zbddw" podStartSLOduration=4.174352418 podStartE2EDuration="18.449979049s" podCreationTimestamp="2025-10-07 12:43:17 +0000 UTC" firstStartedPulling="2025-10-07 12:43:19.407977496 +0000 UTC m=+937.483764334" lastFinishedPulling="2025-10-07 12:43:33.683604127 +0000 UTC m=+951.759390965" observedRunningTime="2025-10-07 12:43:35.448297379 +0000 UTC m=+953.524084227" watchObservedRunningTime="2025-10-07 12:43:35.449979049 +0000 UTC m=+953.525765887" Oct 07 12:43:36 crc kubenswrapper[5024]: I1007 12:43:36.330568 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-qc4t4" event={"ID":"4a2174c3-a953-4535-9b70-5414c07633c0","Type":"ContainerStarted","Data":"58224193194ed10601d108dbe63b64859ed600bdcc12cc054a02210e7735b729"} Oct 07 12:43:36 crc kubenswrapper[5024]: I1007 12:43:36.331225 5024 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-qc4t4" Oct 07 12:43:36 crc kubenswrapper[5024]: I1007 12:43:36.333667 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-646554d9b9-dzx9r" event={"ID":"52744582-1aca-4f75-8dc3-337a19ab3fba","Type":"ContainerStarted","Data":"f0b0d1738c23783fd62515f24c16aa1fd83bcfd06f4426f4ae273b103cb9d84d"} Oct 07 12:43:36 crc kubenswrapper[5024]: I1007 12:43:36.336945 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-8r8zb" event={"ID":"75de1768-d533-460a-8397-012ef25ade39","Type":"ContainerStarted","Data":"69741c6f7c3a4850435adfc23a66369e83bbe8461900cec36398c24f409a70f8"} Oct 07 12:43:36 crc kubenswrapper[5024]: I1007 12:43:36.353404 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-qc4t4" podStartSLOduration=5.287618222 podStartE2EDuration="19.353386721s" podCreationTimestamp="2025-10-07 12:43:17 +0000 UTC" firstStartedPulling="2025-10-07 12:43:19.617836808 +0000 UTC m=+937.693623646" lastFinishedPulling="2025-10-07 12:43:33.683605307 +0000 UTC m=+951.759392145" observedRunningTime="2025-10-07 12:43:36.351453115 +0000 UTC m=+954.427239953" watchObservedRunningTime="2025-10-07 12:43:36.353386721 +0000 UTC m=+954.429173559" Oct 07 12:43:41 crc kubenswrapper[5024]: I1007 12:43:41.371706 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-rnxmp" event={"ID":"f15a5a3b-18f5-4fcf-8ecb-b9e31b144f6a","Type":"ContainerStarted","Data":"ee96806212f62d988886fbbbf0521bbd4810906beda8aac8b53ee7bd8cdd9312"} Oct 07 12:43:41 crc kubenswrapper[5024]: I1007 12:43:41.372273 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-rnxmp" Oct 07 12:43:41 crc kubenswrapper[5024]: I1007 12:43:41.374348 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-mgxlm" event={"ID":"eef43156-170e-4dd4-abf1-77fa4763c4b8","Type":"ContainerStarted","Data":"b7bd746ce051ed33b054eb5815bac79750d6df4bfaebced88779a495ab11f88f"} Oct 07 12:43:41 crc kubenswrapper[5024]: I1007 12:43:41.374590 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-mgxlm" Oct 07 12:43:41 crc kubenswrapper[5024]: I1007 12:43:41.376349 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-rcqrk" event={"ID":"d39b8507-a457-4bdb-95ce-e20abf48c406","Type":"ContainerStarted","Data":"3c12f6df84a5412e4849fceb98e4aa6df6c6b70d17afa2dccc23b55a7baa26c7"} Oct 07 12:43:41 crc kubenswrapper[5024]: I1007 12:43:41.376619 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-rcqrk" Oct 07 12:43:41 crc kubenswrapper[5024]: I1007 12:43:41.378298 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-rbwnk" event={"ID":"5d4c59de-cd84-49b2-b320-3217d5cc31f3","Type":"ContainerStarted","Data":"8f764600a77ed16856fee57b5b21fd19f549056a781919695f54fd6767c1edc6"} Oct 07 12:43:41 crc kubenswrapper[5024]: I1007 12:43:41.378615 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-rbwnk" Oct 07 12:43:41 crc kubenswrapper[5024]: I1007 12:43:41.380204 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-tb2fm" 
event={"ID":"29940c9b-e33d-432a-86e1-e552ce1cefdd","Type":"ContainerStarted","Data":"290edc59f7005d01c803c1c1aa80daebb191d5b100923e83c2f46e7e668ce32c"} Oct 07 12:43:41 crc kubenswrapper[5024]: I1007 12:43:41.381190 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-sfklm" event={"ID":"01b8fb04-a40d-4e7b-be35-25f4450ec199","Type":"ContainerStarted","Data":"755d60b5171832ce416bd9ad532d4db5aab172d67b5b49c29484ac37c2293e73"} Oct 07 12:43:41 crc kubenswrapper[5024]: I1007 12:43:41.382653 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-dgn5z" event={"ID":"3fa4af87-32c4-423c-98c1-9cb8b7db5da2","Type":"ContainerStarted","Data":"b88d2a2550da1fbd0d89054509f5041be449296ecc9651bacb649b924fa2d8fa"} Oct 07 12:43:41 crc kubenswrapper[5024]: I1007 12:43:41.382954 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-dgn5z" Oct 07 12:43:41 crc kubenswrapper[5024]: I1007 12:43:41.399171 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-rnxmp" podStartSLOduration=4.227906178 podStartE2EDuration="24.399151319s" podCreationTimestamp="2025-10-07 12:43:17 +0000 UTC" firstStartedPulling="2025-10-07 12:43:19.961451311 +0000 UTC m=+938.037238149" lastFinishedPulling="2025-10-07 12:43:40.132696452 +0000 UTC m=+958.208483290" observedRunningTime="2025-10-07 12:43:41.394027939 +0000 UTC m=+959.469814777" watchObservedRunningTime="2025-10-07 12:43:41.399151319 +0000 UTC m=+959.474938157" Oct 07 12:43:41 crc kubenswrapper[5024]: I1007 12:43:41.414750 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-sfklm" podStartSLOduration=3.249788989 podStartE2EDuration="23.414725115s" 
podCreationTimestamp="2025-10-07 12:43:18 +0000 UTC" firstStartedPulling="2025-10-07 12:43:19.939852318 +0000 UTC m=+938.015639156" lastFinishedPulling="2025-10-07 12:43:40.104788444 +0000 UTC m=+958.180575282" observedRunningTime="2025-10-07 12:43:41.408376089 +0000 UTC m=+959.484162927" watchObservedRunningTime="2025-10-07 12:43:41.414725115 +0000 UTC m=+959.490511993" Oct 07 12:43:41 crc kubenswrapper[5024]: I1007 12:43:41.431991 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-mgxlm" podStartSLOduration=4.267059965 podStartE2EDuration="24.431973931s" podCreationTimestamp="2025-10-07 12:43:17 +0000 UTC" firstStartedPulling="2025-10-07 12:43:19.939655802 +0000 UTC m=+938.015442640" lastFinishedPulling="2025-10-07 12:43:40.104569768 +0000 UTC m=+958.180356606" observedRunningTime="2025-10-07 12:43:41.420777283 +0000 UTC m=+959.496564141" watchObservedRunningTime="2025-10-07 12:43:41.431973931 +0000 UTC m=+959.507760769" Oct 07 12:43:41 crc kubenswrapper[5024]: I1007 12:43:41.440203 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-rcqrk" podStartSLOduration=4.700701158 podStartE2EDuration="24.440184152s" podCreationTimestamp="2025-10-07 12:43:17 +0000 UTC" firstStartedPulling="2025-10-07 12:43:19.920687346 +0000 UTC m=+937.996474184" lastFinishedPulling="2025-10-07 12:43:39.66017034 +0000 UTC m=+957.735957178" observedRunningTime="2025-10-07 12:43:41.436904436 +0000 UTC m=+959.512691274" watchObservedRunningTime="2025-10-07 12:43:41.440184152 +0000 UTC m=+959.515970990" Oct 07 12:43:41 crc kubenswrapper[5024]: I1007 12:43:41.453814 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-tb2fm" podStartSLOduration=4.429991647 podStartE2EDuration="23.453797801s" podCreationTimestamp="2025-10-07 
12:43:18 +0000 UTC" firstStartedPulling="2025-10-07 12:43:19.961438681 +0000 UTC m=+938.037225519" lastFinishedPulling="2025-10-07 12:43:38.985244825 +0000 UTC m=+957.061031673" observedRunningTime="2025-10-07 12:43:41.449494615 +0000 UTC m=+959.525281453" watchObservedRunningTime="2025-10-07 12:43:41.453797801 +0000 UTC m=+959.529584639" Oct 07 12:43:41 crc kubenswrapper[5024]: I1007 12:43:41.469959 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-dgn5z" podStartSLOduration=4.2299350669999995 podStartE2EDuration="24.469943134s" podCreationTimestamp="2025-10-07 12:43:17 +0000 UTC" firstStartedPulling="2025-10-07 12:43:19.893471568 +0000 UTC m=+937.969258406" lastFinishedPulling="2025-10-07 12:43:40.133479635 +0000 UTC m=+958.209266473" observedRunningTime="2025-10-07 12:43:41.466999188 +0000 UTC m=+959.542786026" watchObservedRunningTime="2025-10-07 12:43:41.469943134 +0000 UTC m=+959.545729972" Oct 07 12:43:41 crc kubenswrapper[5024]: I1007 12:43:41.484445 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-rbwnk" podStartSLOduration=4.461733987 podStartE2EDuration="23.484427709s" podCreationTimestamp="2025-10-07 12:43:18 +0000 UTC" firstStartedPulling="2025-10-07 12:43:19.962157522 +0000 UTC m=+938.037944360" lastFinishedPulling="2025-10-07 12:43:38.984851244 +0000 UTC m=+957.060638082" observedRunningTime="2025-10-07 12:43:41.480506854 +0000 UTC m=+959.556293682" watchObservedRunningTime="2025-10-07 12:43:41.484427709 +0000 UTC m=+959.560214547" Oct 07 12:43:43 crc kubenswrapper[5024]: I1007 12:43:43.720649 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Oct 07 12:43:43 crc kubenswrapper[5024]: I1007 12:43:43.720736 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 12:43:47 crc kubenswrapper[5024]: I1007 12:43:47.425669 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb" event={"ID":"c310d938-f1f0-4f85-90c3-f0625fc41848","Type":"ContainerStarted","Data":"5bdd4dd09dea8ab3c139265dfb9fcdb331e23593798f7a02638c22ef611a75f3"} Oct 07 12:43:47 crc kubenswrapper[5024]: I1007 12:43:47.426504 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb" Oct 07 12:43:47 crc kubenswrapper[5024]: I1007 12:43:47.430590 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-dszfh" event={"ID":"7df3da1a-3dc0-400e-a3a6-4878652ecfdc","Type":"ContainerStarted","Data":"2e68c818d0461ed0a04e794eecf2863be8dcbada1f0e7e9374170a90d1c333cc"} Oct 07 12:43:47 crc kubenswrapper[5024]: I1007 12:43:47.431014 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-dszfh" Oct 07 12:43:47 crc kubenswrapper[5024]: I1007 12:43:47.452586 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb" podStartSLOduration=3.377058975 podStartE2EDuration="30.452571165s" podCreationTimestamp="2025-10-07 12:43:17 +0000 UTC" firstStartedPulling="2025-10-07 12:43:20.157314813 +0000 UTC m=+938.233101651" 
lastFinishedPulling="2025-10-07 12:43:47.232827003 +0000 UTC m=+965.308613841" observedRunningTime="2025-10-07 12:43:47.448321821 +0000 UTC m=+965.524108659" watchObservedRunningTime="2025-10-07 12:43:47.452571165 +0000 UTC m=+965.528358003" Oct 07 12:43:47 crc kubenswrapper[5024]: I1007 12:43:47.466878 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-dszfh" podStartSLOduration=2.724182016 podStartE2EDuration="30.466858514s" podCreationTimestamp="2025-10-07 12:43:17 +0000 UTC" firstStartedPulling="2025-10-07 12:43:19.418285978 +0000 UTC m=+937.494072816" lastFinishedPulling="2025-10-07 12:43:47.160962476 +0000 UTC m=+965.236749314" observedRunningTime="2025-10-07 12:43:47.464358561 +0000 UTC m=+965.540145399" watchObservedRunningTime="2025-10-07 12:43:47.466858514 +0000 UTC m=+965.542645352" Oct 07 12:43:47 crc kubenswrapper[5024]: I1007 12:43:47.866976 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-6tzwp" Oct 07 12:43:47 crc kubenswrapper[5024]: I1007 12:43:47.879167 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-646554d9b9-dzx9r" Oct 07 12:43:47 crc kubenswrapper[5024]: I1007 12:43:47.954305 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-8r8zb" Oct 07 12:43:48 crc kubenswrapper[5024]: I1007 12:43:48.022124 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-zbddw" Oct 07 12:43:48 crc kubenswrapper[5024]: I1007 12:43:48.158314 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-649675d675-xscld" Oct 07 12:43:48 crc 
kubenswrapper[5024]: I1007 12:43:48.174906 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-l6h5v" Oct 07 12:43:48 crc kubenswrapper[5024]: I1007 12:43:48.220307 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-qc4t4" Oct 07 12:43:48 crc kubenswrapper[5024]: I1007 12:43:48.269288 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-gshfh" Oct 07 12:43:48 crc kubenswrapper[5024]: I1007 12:43:48.288993 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-rnxmp" Oct 07 12:43:48 crc kubenswrapper[5024]: I1007 12:43:48.325795 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-lwz42" Oct 07 12:43:48 crc kubenswrapper[5024]: I1007 12:43:48.389371 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-26sbg" Oct 07 12:43:48 crc kubenswrapper[5024]: I1007 12:43:48.401564 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-mgxlm" Oct 07 12:43:48 crc kubenswrapper[5024]: I1007 12:43:48.408811 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-rcqrk" Oct 07 12:43:48 crc kubenswrapper[5024]: I1007 12:43:48.442121 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-7fsjz" 
event={"ID":"0a3c98f6-0e02-4493-b3d6-f030d73ca3ac","Type":"ContainerStarted","Data":"3eab1d61a12ad4a8131c4cfbadd4b8036ed622dca40a313f55b69d4a58d8b686"} Oct 07 12:43:48 crc kubenswrapper[5024]: I1007 12:43:48.467888 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-7fsjz" podStartSLOduration=3.875110356 podStartE2EDuration="31.467869249s" podCreationTimestamp="2025-10-07 12:43:17 +0000 UTC" firstStartedPulling="2025-10-07 12:43:19.918492112 +0000 UTC m=+937.994278950" lastFinishedPulling="2025-10-07 12:43:47.511251005 +0000 UTC m=+965.587037843" observedRunningTime="2025-10-07 12:43:48.46584849 +0000 UTC m=+966.541635348" watchObservedRunningTime="2025-10-07 12:43:48.467869249 +0000 UTC m=+966.543656087" Oct 07 12:43:48 crc kubenswrapper[5024]: I1007 12:43:48.471166 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-dgn5z" Oct 07 12:43:48 crc kubenswrapper[5024]: I1007 12:43:48.494161 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-tb2fm" Oct 07 12:43:48 crc kubenswrapper[5024]: I1007 12:43:48.499574 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-tb2fm" Oct 07 12:43:48 crc kubenswrapper[5024]: I1007 12:43:48.507564 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-rbwnk" Oct 07 12:43:48 crc kubenswrapper[5024]: I1007 12:43:48.747229 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-7fsjz" Oct 07 12:43:50 crc kubenswrapper[5024]: I1007 12:43:50.457795 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-rwftw" event={"ID":"be793dd5-2676-4289-961a-9e6c0731b13a","Type":"ContainerStarted","Data":"9341f1d96c27b08ecb55ca753008e7e541a7b03ebb52122236c25bcdd46d9271"} Oct 07 12:43:50 crc kubenswrapper[5024]: I1007 12:43:50.458304 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-rwftw" Oct 07 12:43:50 crc kubenswrapper[5024]: I1007 12:43:50.460019 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-qnjtl" event={"ID":"f2285083-77e3-448b-b4f0-27adfb683e17","Type":"ContainerStarted","Data":"07aca8489ae7aa0d13ae58c14e985be267e895955b21a4c7d6de15a22dfe1a35"} Oct 07 12:43:50 crc kubenswrapper[5024]: I1007 12:43:50.460165 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-qnjtl" Oct 07 12:43:50 crc kubenswrapper[5024]: I1007 12:43:50.479545 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-rwftw" podStartSLOduration=2.91880317 podStartE2EDuration="33.47952496s" podCreationTimestamp="2025-10-07 12:43:17 +0000 UTC" firstStartedPulling="2025-10-07 12:43:19.628554412 +0000 UTC m=+937.704341250" lastFinishedPulling="2025-10-07 12:43:50.189276202 +0000 UTC m=+968.265063040" observedRunningTime="2025-10-07 12:43:50.474121921 +0000 UTC m=+968.549908769" watchObservedRunningTime="2025-10-07 12:43:50.47952496 +0000 UTC m=+968.555311808" Oct 07 12:43:50 crc kubenswrapper[5024]: I1007 12:43:50.487780 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-qnjtl" podStartSLOduration=2.91745138 podStartE2EDuration="33.487761331s" podCreationTimestamp="2025-10-07 12:43:17 +0000 UTC" 
firstStartedPulling="2025-10-07 12:43:19.620213408 +0000 UTC m=+937.696000246" lastFinishedPulling="2025-10-07 12:43:50.190523349 +0000 UTC m=+968.266310197" observedRunningTime="2025-10-07 12:43:50.487458422 +0000 UTC m=+968.563245290" watchObservedRunningTime="2025-10-07 12:43:50.487761331 +0000 UTC m=+968.563548169" Oct 07 12:43:58 crc kubenswrapper[5024]: I1007 12:43:58.088219 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-dszfh" Oct 07 12:43:58 crc kubenswrapper[5024]: I1007 12:43:58.095335 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-qnjtl" Oct 07 12:43:58 crc kubenswrapper[5024]: I1007 12:43:58.260916 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-rwftw" Oct 07 12:43:58 crc kubenswrapper[5024]: I1007 12:43:58.760027 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-7fsjz" Oct 07 12:43:59 crc kubenswrapper[5024]: I1007 12:43:59.852377 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb" Oct 07 12:44:13 crc kubenswrapper[5024]: I1007 12:44:13.720155 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 12:44:13 crc kubenswrapper[5024]: I1007 12:44:13.720747 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 12:44:15 crc kubenswrapper[5024]: I1007 12:44:15.979104 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kj7lt"] Oct 07 12:44:15 crc kubenswrapper[5024]: I1007 12:44:15.980672 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kj7lt" Oct 07 12:44:15 crc kubenswrapper[5024]: I1007 12:44:15.983119 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Oct 07 12:44:15 crc kubenswrapper[5024]: I1007 12:44:15.983119 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Oct 07 12:44:15 crc kubenswrapper[5024]: I1007 12:44:15.983196 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-54g42" Oct 07 12:44:15 crc kubenswrapper[5024]: I1007 12:44:15.983840 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 07 12:44:15 crc kubenswrapper[5024]: I1007 12:44:15.989416 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kj7lt"] Oct 07 12:44:16 crc kubenswrapper[5024]: I1007 12:44:16.080605 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-wf4n8"] Oct 07 12:44:16 crc kubenswrapper[5024]: I1007 12:44:16.081431 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab71f66-cf38-499b-b384-5519211260d0-config\") pod \"dnsmasq-dns-675f4bcbfc-kj7lt\" (UID: \"0ab71f66-cf38-499b-b384-5519211260d0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kj7lt" Oct 07 12:44:16 crc kubenswrapper[5024]: I1007 12:44:16.081562 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxrrg\" (UniqueName: \"kubernetes.io/projected/0ab71f66-cf38-499b-b384-5519211260d0-kube-api-access-nxrrg\") pod \"dnsmasq-dns-675f4bcbfc-kj7lt\" (UID: \"0ab71f66-cf38-499b-b384-5519211260d0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kj7lt" Oct 07 12:44:16 crc kubenswrapper[5024]: I1007 12:44:16.081893 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-wf4n8" Oct 07 12:44:16 crc kubenswrapper[5024]: I1007 12:44:16.087727 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 07 12:44:16 crc kubenswrapper[5024]: I1007 12:44:16.102850 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-wf4n8"] Oct 07 12:44:16 crc kubenswrapper[5024]: I1007 12:44:16.183419 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5498c13e-f180-472a-8fc7-e1d4ea17b853-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-wf4n8\" (UID: \"5498c13e-f180-472a-8fc7-e1d4ea17b853\") " pod="openstack/dnsmasq-dns-78dd6ddcc-wf4n8" Oct 07 12:44:16 crc kubenswrapper[5024]: I1007 12:44:16.183510 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxrrg\" (UniqueName: \"kubernetes.io/projected/0ab71f66-cf38-499b-b384-5519211260d0-kube-api-access-nxrrg\") pod \"dnsmasq-dns-675f4bcbfc-kj7lt\" (UID: \"0ab71f66-cf38-499b-b384-5519211260d0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kj7lt" Oct 07 12:44:16 crc kubenswrapper[5024]: I1007 12:44:16.183578 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab71f66-cf38-499b-b384-5519211260d0-config\") pod \"dnsmasq-dns-675f4bcbfc-kj7lt\" (UID: \"0ab71f66-cf38-499b-b384-5519211260d0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kj7lt" 
Oct 07 12:44:16 crc kubenswrapper[5024]: I1007 12:44:16.183610 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5498c13e-f180-472a-8fc7-e1d4ea17b853-config\") pod \"dnsmasq-dns-78dd6ddcc-wf4n8\" (UID: \"5498c13e-f180-472a-8fc7-e1d4ea17b853\") " pod="openstack/dnsmasq-dns-78dd6ddcc-wf4n8" Oct 07 12:44:16 crc kubenswrapper[5024]: I1007 12:44:16.183632 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzh2r\" (UniqueName: \"kubernetes.io/projected/5498c13e-f180-472a-8fc7-e1d4ea17b853-kube-api-access-fzh2r\") pod \"dnsmasq-dns-78dd6ddcc-wf4n8\" (UID: \"5498c13e-f180-472a-8fc7-e1d4ea17b853\") " pod="openstack/dnsmasq-dns-78dd6ddcc-wf4n8" Oct 07 12:44:16 crc kubenswrapper[5024]: I1007 12:44:16.184917 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab71f66-cf38-499b-b384-5519211260d0-config\") pod \"dnsmasq-dns-675f4bcbfc-kj7lt\" (UID: \"0ab71f66-cf38-499b-b384-5519211260d0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kj7lt" Oct 07 12:44:16 crc kubenswrapper[5024]: I1007 12:44:16.210660 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxrrg\" (UniqueName: \"kubernetes.io/projected/0ab71f66-cf38-499b-b384-5519211260d0-kube-api-access-nxrrg\") pod \"dnsmasq-dns-675f4bcbfc-kj7lt\" (UID: \"0ab71f66-cf38-499b-b384-5519211260d0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kj7lt" Oct 07 12:44:16 crc kubenswrapper[5024]: I1007 12:44:16.284593 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5498c13e-f180-472a-8fc7-e1d4ea17b853-config\") pod \"dnsmasq-dns-78dd6ddcc-wf4n8\" (UID: \"5498c13e-f180-472a-8fc7-e1d4ea17b853\") " pod="openstack/dnsmasq-dns-78dd6ddcc-wf4n8" Oct 07 12:44:16 crc kubenswrapper[5024]: I1007 
12:44:16.284646 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzh2r\" (UniqueName: \"kubernetes.io/projected/5498c13e-f180-472a-8fc7-e1d4ea17b853-kube-api-access-fzh2r\") pod \"dnsmasq-dns-78dd6ddcc-wf4n8\" (UID: \"5498c13e-f180-472a-8fc7-e1d4ea17b853\") " pod="openstack/dnsmasq-dns-78dd6ddcc-wf4n8" Oct 07 12:44:16 crc kubenswrapper[5024]: I1007 12:44:16.284692 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5498c13e-f180-472a-8fc7-e1d4ea17b853-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-wf4n8\" (UID: \"5498c13e-f180-472a-8fc7-e1d4ea17b853\") " pod="openstack/dnsmasq-dns-78dd6ddcc-wf4n8" Oct 07 12:44:16 crc kubenswrapper[5024]: I1007 12:44:16.285474 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5498c13e-f180-472a-8fc7-e1d4ea17b853-config\") pod \"dnsmasq-dns-78dd6ddcc-wf4n8\" (UID: \"5498c13e-f180-472a-8fc7-e1d4ea17b853\") " pod="openstack/dnsmasq-dns-78dd6ddcc-wf4n8" Oct 07 12:44:16 crc kubenswrapper[5024]: I1007 12:44:16.285542 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5498c13e-f180-472a-8fc7-e1d4ea17b853-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-wf4n8\" (UID: \"5498c13e-f180-472a-8fc7-e1d4ea17b853\") " pod="openstack/dnsmasq-dns-78dd6ddcc-wf4n8" Oct 07 12:44:16 crc kubenswrapper[5024]: I1007 12:44:16.295240 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kj7lt" Oct 07 12:44:16 crc kubenswrapper[5024]: I1007 12:44:16.309085 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzh2r\" (UniqueName: \"kubernetes.io/projected/5498c13e-f180-472a-8fc7-e1d4ea17b853-kube-api-access-fzh2r\") pod \"dnsmasq-dns-78dd6ddcc-wf4n8\" (UID: \"5498c13e-f180-472a-8fc7-e1d4ea17b853\") " pod="openstack/dnsmasq-dns-78dd6ddcc-wf4n8" Oct 07 12:44:16 crc kubenswrapper[5024]: I1007 12:44:16.413840 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-wf4n8" Oct 07 12:44:16 crc kubenswrapper[5024]: I1007 12:44:16.745820 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kj7lt"] Oct 07 12:44:16 crc kubenswrapper[5024]: I1007 12:44:16.753113 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 07 12:44:16 crc kubenswrapper[5024]: I1007 12:44:16.861067 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-wf4n8"] Oct 07 12:44:16 crc kubenswrapper[5024]: W1007 12:44:16.864959 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5498c13e_f180_472a_8fc7_e1d4ea17b853.slice/crio-045f2c487b7aaf2c11a3871bae9bd852deecec3a7d3d5ead0e0ba4d065a2fde9 WatchSource:0}: Error finding container 045f2c487b7aaf2c11a3871bae9bd852deecec3a7d3d5ead0e0ba4d065a2fde9: Status 404 returned error can't find the container with id 045f2c487b7aaf2c11a3871bae9bd852deecec3a7d3d5ead0e0ba4d065a2fde9 Oct 07 12:44:17 crc kubenswrapper[5024]: I1007 12:44:17.673553 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-wf4n8" event={"ID":"5498c13e-f180-472a-8fc7-e1d4ea17b853","Type":"ContainerStarted","Data":"045f2c487b7aaf2c11a3871bae9bd852deecec3a7d3d5ead0e0ba4d065a2fde9"} Oct 07 
12:44:17 crc kubenswrapper[5024]: I1007 12:44:17.677073 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-kj7lt" event={"ID":"0ab71f66-cf38-499b-b384-5519211260d0","Type":"ContainerStarted","Data":"dbae4e5e4ade682043961fbf27bdf2c38630886bb25dd57dd5a364b633c107f4"} Oct 07 12:44:18 crc kubenswrapper[5024]: I1007 12:44:18.955850 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kj7lt"] Oct 07 12:44:18 crc kubenswrapper[5024]: I1007 12:44:18.966482 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-krs8x"] Oct 07 12:44:18 crc kubenswrapper[5024]: I1007 12:44:18.967981 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-krs8x" Oct 07 12:44:18 crc kubenswrapper[5024]: I1007 12:44:18.995606 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-krs8x"] Oct 07 12:44:19 crc kubenswrapper[5024]: I1007 12:44:19.128052 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlr9v\" (UniqueName: \"kubernetes.io/projected/b73b80f9-e3fe-4903-87e1-4ad25a161520-kube-api-access-dlr9v\") pod \"dnsmasq-dns-5ccc8479f9-krs8x\" (UID: \"b73b80f9-e3fe-4903-87e1-4ad25a161520\") " pod="openstack/dnsmasq-dns-5ccc8479f9-krs8x" Oct 07 12:44:19 crc kubenswrapper[5024]: I1007 12:44:19.128162 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b73b80f9-e3fe-4903-87e1-4ad25a161520-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-krs8x\" (UID: \"b73b80f9-e3fe-4903-87e1-4ad25a161520\") " pod="openstack/dnsmasq-dns-5ccc8479f9-krs8x" Oct 07 12:44:19 crc kubenswrapper[5024]: I1007 12:44:19.128185 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b73b80f9-e3fe-4903-87e1-4ad25a161520-config\") pod \"dnsmasq-dns-5ccc8479f9-krs8x\" (UID: \"b73b80f9-e3fe-4903-87e1-4ad25a161520\") " pod="openstack/dnsmasq-dns-5ccc8479f9-krs8x" Oct 07 12:44:19 crc kubenswrapper[5024]: I1007 12:44:19.228486 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-wf4n8"] Oct 07 12:44:19 crc kubenswrapper[5024]: I1007 12:44:19.229040 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlr9v\" (UniqueName: \"kubernetes.io/projected/b73b80f9-e3fe-4903-87e1-4ad25a161520-kube-api-access-dlr9v\") pod \"dnsmasq-dns-5ccc8479f9-krs8x\" (UID: \"b73b80f9-e3fe-4903-87e1-4ad25a161520\") " pod="openstack/dnsmasq-dns-5ccc8479f9-krs8x" Oct 07 12:44:19 crc kubenswrapper[5024]: I1007 12:44:19.229156 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b73b80f9-e3fe-4903-87e1-4ad25a161520-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-krs8x\" (UID: \"b73b80f9-e3fe-4903-87e1-4ad25a161520\") " pod="openstack/dnsmasq-dns-5ccc8479f9-krs8x" Oct 07 12:44:19 crc kubenswrapper[5024]: I1007 12:44:19.229188 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b73b80f9-e3fe-4903-87e1-4ad25a161520-config\") pod \"dnsmasq-dns-5ccc8479f9-krs8x\" (UID: \"b73b80f9-e3fe-4903-87e1-4ad25a161520\") " pod="openstack/dnsmasq-dns-5ccc8479f9-krs8x" Oct 07 12:44:19 crc kubenswrapper[5024]: I1007 12:44:19.230275 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b73b80f9-e3fe-4903-87e1-4ad25a161520-config\") pod \"dnsmasq-dns-5ccc8479f9-krs8x\" (UID: \"b73b80f9-e3fe-4903-87e1-4ad25a161520\") " pod="openstack/dnsmasq-dns-5ccc8479f9-krs8x" Oct 07 12:44:19 crc kubenswrapper[5024]: I1007 12:44:19.230598 5024 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b73b80f9-e3fe-4903-87e1-4ad25a161520-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-krs8x\" (UID: \"b73b80f9-e3fe-4903-87e1-4ad25a161520\") " pod="openstack/dnsmasq-dns-5ccc8479f9-krs8x" Oct 07 12:44:19 crc kubenswrapper[5024]: I1007 12:44:19.261968 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5lzpd"] Oct 07 12:44:19 crc kubenswrapper[5024]: I1007 12:44:19.262179 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlr9v\" (UniqueName: \"kubernetes.io/projected/b73b80f9-e3fe-4903-87e1-4ad25a161520-kube-api-access-dlr9v\") pod \"dnsmasq-dns-5ccc8479f9-krs8x\" (UID: \"b73b80f9-e3fe-4903-87e1-4ad25a161520\") " pod="openstack/dnsmasq-dns-5ccc8479f9-krs8x" Oct 07 12:44:19 crc kubenswrapper[5024]: I1007 12:44:19.263421 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5lzpd" Oct 07 12:44:19 crc kubenswrapper[5024]: I1007 12:44:19.282178 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5lzpd"] Oct 07 12:44:19 crc kubenswrapper[5024]: I1007 12:44:19.293270 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-krs8x" Oct 07 12:44:19 crc kubenswrapper[5024]: I1007 12:44:19.434739 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nng6\" (UniqueName: \"kubernetes.io/projected/dee386d3-259e-4044-93d6-7090236db6d7-kube-api-access-9nng6\") pod \"dnsmasq-dns-57d769cc4f-5lzpd\" (UID: \"dee386d3-259e-4044-93d6-7090236db6d7\") " pod="openstack/dnsmasq-dns-57d769cc4f-5lzpd" Oct 07 12:44:19 crc kubenswrapper[5024]: I1007 12:44:19.435121 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dee386d3-259e-4044-93d6-7090236db6d7-config\") pod \"dnsmasq-dns-57d769cc4f-5lzpd\" (UID: \"dee386d3-259e-4044-93d6-7090236db6d7\") " pod="openstack/dnsmasq-dns-57d769cc4f-5lzpd" Oct 07 12:44:19 crc kubenswrapper[5024]: I1007 12:44:19.435204 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dee386d3-259e-4044-93d6-7090236db6d7-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5lzpd\" (UID: \"dee386d3-259e-4044-93d6-7090236db6d7\") " pod="openstack/dnsmasq-dns-57d769cc4f-5lzpd" Oct 07 12:44:19 crc kubenswrapper[5024]: I1007 12:44:19.536499 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nng6\" (UniqueName: \"kubernetes.io/projected/dee386d3-259e-4044-93d6-7090236db6d7-kube-api-access-9nng6\") pod \"dnsmasq-dns-57d769cc4f-5lzpd\" (UID: \"dee386d3-259e-4044-93d6-7090236db6d7\") " pod="openstack/dnsmasq-dns-57d769cc4f-5lzpd" Oct 07 12:44:19 crc kubenswrapper[5024]: I1007 12:44:19.536589 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dee386d3-259e-4044-93d6-7090236db6d7-config\") pod \"dnsmasq-dns-57d769cc4f-5lzpd\" (UID: 
\"dee386d3-259e-4044-93d6-7090236db6d7\") " pod="openstack/dnsmasq-dns-57d769cc4f-5lzpd" Oct 07 12:44:19 crc kubenswrapper[5024]: I1007 12:44:19.536657 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dee386d3-259e-4044-93d6-7090236db6d7-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5lzpd\" (UID: \"dee386d3-259e-4044-93d6-7090236db6d7\") " pod="openstack/dnsmasq-dns-57d769cc4f-5lzpd" Oct 07 12:44:19 crc kubenswrapper[5024]: I1007 12:44:19.537540 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dee386d3-259e-4044-93d6-7090236db6d7-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5lzpd\" (UID: \"dee386d3-259e-4044-93d6-7090236db6d7\") " pod="openstack/dnsmasq-dns-57d769cc4f-5lzpd" Oct 07 12:44:19 crc kubenswrapper[5024]: I1007 12:44:19.538341 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dee386d3-259e-4044-93d6-7090236db6d7-config\") pod \"dnsmasq-dns-57d769cc4f-5lzpd\" (UID: \"dee386d3-259e-4044-93d6-7090236db6d7\") " pod="openstack/dnsmasq-dns-57d769cc4f-5lzpd" Oct 07 12:44:19 crc kubenswrapper[5024]: I1007 12:44:19.571030 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nng6\" (UniqueName: \"kubernetes.io/projected/dee386d3-259e-4044-93d6-7090236db6d7-kube-api-access-9nng6\") pod \"dnsmasq-dns-57d769cc4f-5lzpd\" (UID: \"dee386d3-259e-4044-93d6-7090236db6d7\") " pod="openstack/dnsmasq-dns-57d769cc4f-5lzpd" Oct 07 12:44:19 crc kubenswrapper[5024]: I1007 12:44:19.623788 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5lzpd" Oct 07 12:44:19 crc kubenswrapper[5024]: I1007 12:44:19.829923 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-krs8x"] Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.101075 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5lzpd"] Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.118798 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.120000 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.126762 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.126967 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.127083 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-cf5sn" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.127201 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.127559 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.127838 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.127999 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 
12:44:20.135978 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.246948 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.247023 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.247050 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.247075 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.247093 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.247115 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.247149 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.247564 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdkvx\" (UniqueName: \"kubernetes.io/projected/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-kube-api-access-cdkvx\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.247606 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.247629 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-server-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.247740 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.349614 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.349664 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.349693 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.349709 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.349759 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.349774 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.349819 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdkvx\" (UniqueName: \"kubernetes.io/projected/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-kube-api-access-cdkvx\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.349834 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.349851 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc 
kubenswrapper[5024]: I1007 12:44:20.349872 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.349905 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.350265 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.351082 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.351835 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.352711 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.354566 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.355089 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.355737 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.355888 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.356009 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.366109 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.368670 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdkvx\" (UniqueName: \"kubernetes.io/projected/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-kube-api-access-cdkvx\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.373308 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.406798 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.407998 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.413834 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.414755 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.414859 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.415046 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-fdntn" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.415080 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.415253 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.415257 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.421275 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.454606 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.555492 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d2d61d4d-4921-4832-bb53-3ca3a70663cf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.555558 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d2d61d4d-4921-4832-bb53-3ca3a70663cf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.555645 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.555682 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d2d61d4d-4921-4832-bb53-3ca3a70663cf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.555706 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d2d61d4d-4921-4832-bb53-3ca3a70663cf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc 
kubenswrapper[5024]: I1007 12:44:20.555735 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d2d61d4d-4921-4832-bb53-3ca3a70663cf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.555777 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d2d61d4d-4921-4832-bb53-3ca3a70663cf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.555832 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgjdz\" (UniqueName: \"kubernetes.io/projected/d2d61d4d-4921-4832-bb53-3ca3a70663cf-kube-api-access-mgjdz\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.555854 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d2d61d4d-4921-4832-bb53-3ca3a70663cf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.555874 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d2d61d4d-4921-4832-bb53-3ca3a70663cf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.555896 5024 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2d61d4d-4921-4832-bb53-3ca3a70663cf-config-data\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.657146 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgjdz\" (UniqueName: \"kubernetes.io/projected/d2d61d4d-4921-4832-bb53-3ca3a70663cf-kube-api-access-mgjdz\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.657518 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d2d61d4d-4921-4832-bb53-3ca3a70663cf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.657542 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d2d61d4d-4921-4832-bb53-3ca3a70663cf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.657582 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2d61d4d-4921-4832-bb53-3ca3a70663cf-config-data\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.657605 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/d2d61d4d-4921-4832-bb53-3ca3a70663cf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.657637 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d2d61d4d-4921-4832-bb53-3ca3a70663cf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.657696 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.657725 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d2d61d4d-4921-4832-bb53-3ca3a70663cf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.657746 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d2d61d4d-4921-4832-bb53-3ca3a70663cf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.657768 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d2d61d4d-4921-4832-bb53-3ca3a70663cf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " 
pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.657799 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d2d61d4d-4921-4832-bb53-3ca3a70663cf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.659240 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d2d61d4d-4921-4832-bb53-3ca3a70663cf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.659591 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d2d61d4d-4921-4832-bb53-3ca3a70663cf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.660447 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.660742 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d2d61d4d-4921-4832-bb53-3ca3a70663cf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.661204 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2d61d4d-4921-4832-bb53-3ca3a70663cf-config-data\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.661700 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d2d61d4d-4921-4832-bb53-3ca3a70663cf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.664666 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d2d61d4d-4921-4832-bb53-3ca3a70663cf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.668044 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d2d61d4d-4921-4832-bb53-3ca3a70663cf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.668892 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d2d61d4d-4921-4832-bb53-3ca3a70663cf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.673398 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d2d61d4d-4921-4832-bb53-3ca3a70663cf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " 
pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.679562 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgjdz\" (UniqueName: \"kubernetes.io/projected/d2d61d4d-4921-4832-bb53-3ca3a70663cf-kube-api-access-mgjdz\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.682455 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " pod="openstack/rabbitmq-server-0" Oct 07 12:44:20 crc kubenswrapper[5024]: I1007 12:44:20.752103 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.396874 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.398096 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.400586 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.400938 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.401076 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.401255 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-jhh4s" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.403206 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.407997 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.413840 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.487841 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/7e4863f6-5bdf-407e-ab2c-a161223537cc-secrets\") pod \"openstack-galera-0\" (UID: \"7e4863f6-5bdf-407e-ab2c-a161223537cc\") " pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.488183 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e4863f6-5bdf-407e-ab2c-a161223537cc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"7e4863f6-5bdf-407e-ab2c-a161223537cc\") " pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc 
kubenswrapper[5024]: I1007 12:44:22.488219 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4863f6-5bdf-407e-ab2c-a161223537cc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"7e4863f6-5bdf-407e-ab2c-a161223537cc\") " pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.488243 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7e4863f6-5bdf-407e-ab2c-a161223537cc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"7e4863f6-5bdf-407e-ab2c-a161223537cc\") " pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.488263 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7e4863f6-5bdf-407e-ab2c-a161223537cc-kolla-config\") pod \"openstack-galera-0\" (UID: \"7e4863f6-5bdf-407e-ab2c-a161223537cc\") " pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.488387 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fhzc\" (UniqueName: \"kubernetes.io/projected/7e4863f6-5bdf-407e-ab2c-a161223537cc-kube-api-access-2fhzc\") pod \"openstack-galera-0\" (UID: \"7e4863f6-5bdf-407e-ab2c-a161223537cc\") " pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.488458 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"7e4863f6-5bdf-407e-ab2c-a161223537cc\") " pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.488482 5024 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e4863f6-5bdf-407e-ab2c-a161223537cc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"7e4863f6-5bdf-407e-ab2c-a161223537cc\") " pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.488612 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7e4863f6-5bdf-407e-ab2c-a161223537cc-config-data-default\") pod \"openstack-galera-0\" (UID: \"7e4863f6-5bdf-407e-ab2c-a161223537cc\") " pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.589908 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/7e4863f6-5bdf-407e-ab2c-a161223537cc-secrets\") pod \"openstack-galera-0\" (UID: \"7e4863f6-5bdf-407e-ab2c-a161223537cc\") " pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.589962 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e4863f6-5bdf-407e-ab2c-a161223537cc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"7e4863f6-5bdf-407e-ab2c-a161223537cc\") " pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.589999 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4863f6-5bdf-407e-ab2c-a161223537cc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"7e4863f6-5bdf-407e-ab2c-a161223537cc\") " pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.590025 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" 
(UniqueName: \"kubernetes.io/empty-dir/7e4863f6-5bdf-407e-ab2c-a161223537cc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"7e4863f6-5bdf-407e-ab2c-a161223537cc\") " pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.590051 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7e4863f6-5bdf-407e-ab2c-a161223537cc-kolla-config\") pod \"openstack-galera-0\" (UID: \"7e4863f6-5bdf-407e-ab2c-a161223537cc\") " pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.590099 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fhzc\" (UniqueName: \"kubernetes.io/projected/7e4863f6-5bdf-407e-ab2c-a161223537cc-kube-api-access-2fhzc\") pod \"openstack-galera-0\" (UID: \"7e4863f6-5bdf-407e-ab2c-a161223537cc\") " pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.590175 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"7e4863f6-5bdf-407e-ab2c-a161223537cc\") " pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.590208 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e4863f6-5bdf-407e-ab2c-a161223537cc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"7e4863f6-5bdf-407e-ab2c-a161223537cc\") " pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.590274 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7e4863f6-5bdf-407e-ab2c-a161223537cc-config-data-default\") pod \"openstack-galera-0\" (UID: 
\"7e4863f6-5bdf-407e-ab2c-a161223537cc\") " pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.591502 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7e4863f6-5bdf-407e-ab2c-a161223537cc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"7e4863f6-5bdf-407e-ab2c-a161223537cc\") " pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.591683 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7e4863f6-5bdf-407e-ab2c-a161223537cc-kolla-config\") pod \"openstack-galera-0\" (UID: \"7e4863f6-5bdf-407e-ab2c-a161223537cc\") " pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.592107 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7e4863f6-5bdf-407e-ab2c-a161223537cc-config-data-default\") pod \"openstack-galera-0\" (UID: \"7e4863f6-5bdf-407e-ab2c-a161223537cc\") " pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.592299 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e4863f6-5bdf-407e-ab2c-a161223537cc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"7e4863f6-5bdf-407e-ab2c-a161223537cc\") " pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.592449 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"7e4863f6-5bdf-407e-ab2c-a161223537cc\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.598635 5024 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/7e4863f6-5bdf-407e-ab2c-a161223537cc-secrets\") pod \"openstack-galera-0\" (UID: \"7e4863f6-5bdf-407e-ab2c-a161223537cc\") " pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.607043 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e4863f6-5bdf-407e-ab2c-a161223537cc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"7e4863f6-5bdf-407e-ab2c-a161223537cc\") " pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.611118 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4863f6-5bdf-407e-ab2c-a161223537cc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"7e4863f6-5bdf-407e-ab2c-a161223537cc\") " pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.616103 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fhzc\" (UniqueName: \"kubernetes.io/projected/7e4863f6-5bdf-407e-ab2c-a161223537cc-kube-api-access-2fhzc\") pod \"openstack-galera-0\" (UID: \"7e4863f6-5bdf-407e-ab2c-a161223537cc\") " pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.618156 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"7e4863f6-5bdf-407e-ab2c-a161223537cc\") " pod="openstack/openstack-galera-0" Oct 07 12:44:22 crc kubenswrapper[5024]: I1007 12:44:22.722496 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.616356 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.622777 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.627475 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.627874 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.627931 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.629008 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-pxbrj" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.632264 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.713045 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5b493394-e353-45b2-b7a9-71b94654e2e7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5b493394-e353-45b2-b7a9-71b94654e2e7\") " pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.713094 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4dlr\" (UniqueName: \"kubernetes.io/projected/5b493394-e353-45b2-b7a9-71b94654e2e7-kube-api-access-j4dlr\") pod \"openstack-cell1-galera-0\" 
(UID: \"5b493394-e353-45b2-b7a9-71b94654e2e7\") " pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.713149 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5b493394-e353-45b2-b7a9-71b94654e2e7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5b493394-e353-45b2-b7a9-71b94654e2e7\") " pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.713173 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5b493394-e353-45b2-b7a9-71b94654e2e7\") " pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.713196 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5b493394-e353-45b2-b7a9-71b94654e2e7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5b493394-e353-45b2-b7a9-71b94654e2e7\") " pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.713219 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b493394-e353-45b2-b7a9-71b94654e2e7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5b493394-e353-45b2-b7a9-71b94654e2e7\") " pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.713236 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b493394-e353-45b2-b7a9-71b94654e2e7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: 
\"5b493394-e353-45b2-b7a9-71b94654e2e7\") " pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.713295 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/5b493394-e353-45b2-b7a9-71b94654e2e7-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"5b493394-e353-45b2-b7a9-71b94654e2e7\") " pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.713309 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b493394-e353-45b2-b7a9-71b94654e2e7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5b493394-e353-45b2-b7a9-71b94654e2e7\") " pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.740997 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-krs8x" event={"ID":"b73b80f9-e3fe-4903-87e1-4ad25a161520","Type":"ContainerStarted","Data":"86fec0457a46c1af0ea1dc53d98f6d7e9b706dcf54ff56c6a0bc3571eb624a29"} Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.743889 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5lzpd" event={"ID":"dee386d3-259e-4044-93d6-7090236db6d7","Type":"ContainerStarted","Data":"c78a5b3526263df6767ab9de312a60d1a30e20a6db3124cac4f18f63f975b067"} Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.815042 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5b493394-e353-45b2-b7a9-71b94654e2e7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5b493394-e353-45b2-b7a9-71b94654e2e7\") " pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.815125 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j4dlr\" (UniqueName: \"kubernetes.io/projected/5b493394-e353-45b2-b7a9-71b94654e2e7-kube-api-access-j4dlr\") pod \"openstack-cell1-galera-0\" (UID: \"5b493394-e353-45b2-b7a9-71b94654e2e7\") " pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.815209 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5b493394-e353-45b2-b7a9-71b94654e2e7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5b493394-e353-45b2-b7a9-71b94654e2e7\") " pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.815234 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5b493394-e353-45b2-b7a9-71b94654e2e7\") " pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.815257 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5b493394-e353-45b2-b7a9-71b94654e2e7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5b493394-e353-45b2-b7a9-71b94654e2e7\") " pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.815304 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b493394-e353-45b2-b7a9-71b94654e2e7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5b493394-e353-45b2-b7a9-71b94654e2e7\") " pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.815321 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5b493394-e353-45b2-b7a9-71b94654e2e7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5b493394-e353-45b2-b7a9-71b94654e2e7\") " pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.815352 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/5b493394-e353-45b2-b7a9-71b94654e2e7-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"5b493394-e353-45b2-b7a9-71b94654e2e7\") " pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.815370 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b493394-e353-45b2-b7a9-71b94654e2e7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5b493394-e353-45b2-b7a9-71b94654e2e7\") " pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.815629 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5b493394-e353-45b2-b7a9-71b94654e2e7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5b493394-e353-45b2-b7a9-71b94654e2e7\") " pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.816362 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5b493394-e353-45b2-b7a9-71b94654e2e7\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.816957 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5b493394-e353-45b2-b7a9-71b94654e2e7-config-data-default\") pod 
\"openstack-cell1-galera-0\" (UID: \"5b493394-e353-45b2-b7a9-71b94654e2e7\") " pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.817293 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5b493394-e353-45b2-b7a9-71b94654e2e7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5b493394-e353-45b2-b7a9-71b94654e2e7\") " pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.817587 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b493394-e353-45b2-b7a9-71b94654e2e7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5b493394-e353-45b2-b7a9-71b94654e2e7\") " pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.832333 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/5b493394-e353-45b2-b7a9-71b94654e2e7-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"5b493394-e353-45b2-b7a9-71b94654e2e7\") " pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.834769 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b493394-e353-45b2-b7a9-71b94654e2e7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5b493394-e353-45b2-b7a9-71b94654e2e7\") " pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.837897 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b493394-e353-45b2-b7a9-71b94654e2e7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5b493394-e353-45b2-b7a9-71b94654e2e7\") " pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc 
kubenswrapper[5024]: I1007 12:44:23.848169 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4dlr\" (UniqueName: \"kubernetes.io/projected/5b493394-e353-45b2-b7a9-71b94654e2e7-kube-api-access-j4dlr\") pod \"openstack-cell1-galera-0\" (UID: \"5b493394-e353-45b2-b7a9-71b94654e2e7\") " pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.850411 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5b493394-e353-45b2-b7a9-71b94654e2e7\") " pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.950689 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.984969 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.986229 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.996307 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.996608 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Oct 07 12:44:23 crc kubenswrapper[5024]: I1007 12:44:23.996829 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-8695j" Oct 07 12:44:24 crc kubenswrapper[5024]: I1007 12:44:24.004348 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 07 12:44:24 crc kubenswrapper[5024]: I1007 12:44:24.119465 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/acb1b089-7f99-44d4-9e4d-4cb652ee6e21-config-data\") pod \"memcached-0\" (UID: \"acb1b089-7f99-44d4-9e4d-4cb652ee6e21\") " pod="openstack/memcached-0" Oct 07 12:44:24 crc kubenswrapper[5024]: I1007 12:44:24.119601 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/acb1b089-7f99-44d4-9e4d-4cb652ee6e21-memcached-tls-certs\") pod \"memcached-0\" (UID: \"acb1b089-7f99-44d4-9e4d-4cb652ee6e21\") " pod="openstack/memcached-0" Oct 07 12:44:24 crc kubenswrapper[5024]: I1007 12:44:24.119641 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/acb1b089-7f99-44d4-9e4d-4cb652ee6e21-kolla-config\") pod \"memcached-0\" (UID: \"acb1b089-7f99-44d4-9e4d-4cb652ee6e21\") " pod="openstack/memcached-0" Oct 07 12:44:24 crc kubenswrapper[5024]: I1007 12:44:24.119672 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5bp6f\" (UniqueName: \"kubernetes.io/projected/acb1b089-7f99-44d4-9e4d-4cb652ee6e21-kube-api-access-5bp6f\") pod \"memcached-0\" (UID: \"acb1b089-7f99-44d4-9e4d-4cb652ee6e21\") " pod="openstack/memcached-0" Oct 07 12:44:24 crc kubenswrapper[5024]: I1007 12:44:24.119697 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acb1b089-7f99-44d4-9e4d-4cb652ee6e21-combined-ca-bundle\") pod \"memcached-0\" (UID: \"acb1b089-7f99-44d4-9e4d-4cb652ee6e21\") " pod="openstack/memcached-0" Oct 07 12:44:24 crc kubenswrapper[5024]: I1007 12:44:24.220849 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bp6f\" (UniqueName: \"kubernetes.io/projected/acb1b089-7f99-44d4-9e4d-4cb652ee6e21-kube-api-access-5bp6f\") pod \"memcached-0\" (UID: \"acb1b089-7f99-44d4-9e4d-4cb652ee6e21\") " pod="openstack/memcached-0" Oct 07 12:44:24 crc kubenswrapper[5024]: I1007 12:44:24.220906 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acb1b089-7f99-44d4-9e4d-4cb652ee6e21-combined-ca-bundle\") pod \"memcached-0\" (UID: \"acb1b089-7f99-44d4-9e4d-4cb652ee6e21\") " pod="openstack/memcached-0" Oct 07 12:44:24 crc kubenswrapper[5024]: I1007 12:44:24.220938 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/acb1b089-7f99-44d4-9e4d-4cb652ee6e21-config-data\") pod \"memcached-0\" (UID: \"acb1b089-7f99-44d4-9e4d-4cb652ee6e21\") " pod="openstack/memcached-0" Oct 07 12:44:24 crc kubenswrapper[5024]: I1007 12:44:24.221001 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/acb1b089-7f99-44d4-9e4d-4cb652ee6e21-memcached-tls-certs\") pod \"memcached-0\" (UID: 
\"acb1b089-7f99-44d4-9e4d-4cb652ee6e21\") " pod="openstack/memcached-0" Oct 07 12:44:24 crc kubenswrapper[5024]: I1007 12:44:24.221025 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/acb1b089-7f99-44d4-9e4d-4cb652ee6e21-kolla-config\") pod \"memcached-0\" (UID: \"acb1b089-7f99-44d4-9e4d-4cb652ee6e21\") " pod="openstack/memcached-0" Oct 07 12:44:24 crc kubenswrapper[5024]: I1007 12:44:24.221859 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/acb1b089-7f99-44d4-9e4d-4cb652ee6e21-kolla-config\") pod \"memcached-0\" (UID: \"acb1b089-7f99-44d4-9e4d-4cb652ee6e21\") " pod="openstack/memcached-0" Oct 07 12:44:24 crc kubenswrapper[5024]: I1007 12:44:24.222359 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/acb1b089-7f99-44d4-9e4d-4cb652ee6e21-config-data\") pod \"memcached-0\" (UID: \"acb1b089-7f99-44d4-9e4d-4cb652ee6e21\") " pod="openstack/memcached-0" Oct 07 12:44:24 crc kubenswrapper[5024]: I1007 12:44:24.224599 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/acb1b089-7f99-44d4-9e4d-4cb652ee6e21-memcached-tls-certs\") pod \"memcached-0\" (UID: \"acb1b089-7f99-44d4-9e4d-4cb652ee6e21\") " pod="openstack/memcached-0" Oct 07 12:44:24 crc kubenswrapper[5024]: I1007 12:44:24.224751 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acb1b089-7f99-44d4-9e4d-4cb652ee6e21-combined-ca-bundle\") pod \"memcached-0\" (UID: \"acb1b089-7f99-44d4-9e4d-4cb652ee6e21\") " pod="openstack/memcached-0" Oct 07 12:44:24 crc kubenswrapper[5024]: I1007 12:44:24.239731 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bp6f\" (UniqueName: 
\"kubernetes.io/projected/acb1b089-7f99-44d4-9e4d-4cb652ee6e21-kube-api-access-5bp6f\") pod \"memcached-0\" (UID: \"acb1b089-7f99-44d4-9e4d-4cb652ee6e21\") " pod="openstack/memcached-0" Oct 07 12:44:24 crc kubenswrapper[5024]: I1007 12:44:24.318104 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 07 12:44:25 crc kubenswrapper[5024]: I1007 12:44:25.625268 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 07 12:44:25 crc kubenswrapper[5024]: I1007 12:44:25.626548 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 07 12:44:25 crc kubenswrapper[5024]: I1007 12:44:25.628749 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-zpg2w" Oct 07 12:44:25 crc kubenswrapper[5024]: I1007 12:44:25.637290 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 07 12:44:25 crc kubenswrapper[5024]: I1007 12:44:25.743697 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bttg5\" (UniqueName: \"kubernetes.io/projected/f381de8f-0818-456c-9acb-7ee939a6da12-kube-api-access-bttg5\") pod \"kube-state-metrics-0\" (UID: \"f381de8f-0818-456c-9acb-7ee939a6da12\") " pod="openstack/kube-state-metrics-0" Oct 07 12:44:25 crc kubenswrapper[5024]: I1007 12:44:25.845528 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bttg5\" (UniqueName: \"kubernetes.io/projected/f381de8f-0818-456c-9acb-7ee939a6da12-kube-api-access-bttg5\") pod \"kube-state-metrics-0\" (UID: \"f381de8f-0818-456c-9acb-7ee939a6da12\") " pod="openstack/kube-state-metrics-0" Oct 07 12:44:25 crc kubenswrapper[5024]: I1007 12:44:25.864603 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bttg5\" (UniqueName: 
\"kubernetes.io/projected/f381de8f-0818-456c-9acb-7ee939a6da12-kube-api-access-bttg5\") pod \"kube-state-metrics-0\" (UID: \"f381de8f-0818-456c-9acb-7ee939a6da12\") " pod="openstack/kube-state-metrics-0" Oct 07 12:44:25 crc kubenswrapper[5024]: I1007 12:44:25.952984 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.409960 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-bs9lb"] Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.411837 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bs9lb" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.418466 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.418605 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-w6wcp" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.418655 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.429359 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bs9lb"] Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.454936 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-q2ktf"] Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.456612 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-q2ktf" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.485696 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-q2ktf"] Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.507372 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w96g2\" (UniqueName: \"kubernetes.io/projected/91fa5e61-2577-4fad-9b32-395eb0e5105b-kube-api-access-w96g2\") pod \"ovn-controller-bs9lb\" (UID: \"91fa5e61-2577-4fad-9b32-395eb0e5105b\") " pod="openstack/ovn-controller-bs9lb" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.507420 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/91fa5e61-2577-4fad-9b32-395eb0e5105b-var-log-ovn\") pod \"ovn-controller-bs9lb\" (UID: \"91fa5e61-2577-4fad-9b32-395eb0e5105b\") " pod="openstack/ovn-controller-bs9lb" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.507459 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/befa2358-2851-4973-b1e2-6f003e9f1089-scripts\") pod \"ovn-controller-ovs-q2ktf\" (UID: \"befa2358-2851-4973-b1e2-6f003e9f1089\") " pod="openstack/ovn-controller-ovs-q2ktf" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.507475 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/91fa5e61-2577-4fad-9b32-395eb0e5105b-var-run\") pod \"ovn-controller-bs9lb\" (UID: \"91fa5e61-2577-4fad-9b32-395eb0e5105b\") " pod="openstack/ovn-controller-bs9lb" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.507490 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/befa2358-2851-4973-b1e2-6f003e9f1089-var-run\") pod \"ovn-controller-ovs-q2ktf\" (UID: \"befa2358-2851-4973-b1e2-6f003e9f1089\") " pod="openstack/ovn-controller-ovs-q2ktf" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.507510 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/91fa5e61-2577-4fad-9b32-395eb0e5105b-ovn-controller-tls-certs\") pod \"ovn-controller-bs9lb\" (UID: \"91fa5e61-2577-4fad-9b32-395eb0e5105b\") " pod="openstack/ovn-controller-bs9lb" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.507542 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/befa2358-2851-4973-b1e2-6f003e9f1089-etc-ovs\") pod \"ovn-controller-ovs-q2ktf\" (UID: \"befa2358-2851-4973-b1e2-6f003e9f1089\") " pod="openstack/ovn-controller-ovs-q2ktf" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.507564 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91fa5e61-2577-4fad-9b32-395eb0e5105b-scripts\") pod \"ovn-controller-bs9lb\" (UID: \"91fa5e61-2577-4fad-9b32-395eb0e5105b\") " pod="openstack/ovn-controller-bs9lb" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.507593 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/91fa5e61-2577-4fad-9b32-395eb0e5105b-var-run-ovn\") pod \"ovn-controller-bs9lb\" (UID: \"91fa5e61-2577-4fad-9b32-395eb0e5105b\") " pod="openstack/ovn-controller-bs9lb" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.507613 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/befa2358-2851-4973-b1e2-6f003e9f1089-var-lib\") pod \"ovn-controller-ovs-q2ktf\" (UID: \"befa2358-2851-4973-b1e2-6f003e9f1089\") " pod="openstack/ovn-controller-ovs-q2ktf" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.507639 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fa5e61-2577-4fad-9b32-395eb0e5105b-combined-ca-bundle\") pod \"ovn-controller-bs9lb\" (UID: \"91fa5e61-2577-4fad-9b32-395eb0e5105b\") " pod="openstack/ovn-controller-bs9lb" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.507667 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsqbs\" (UniqueName: \"kubernetes.io/projected/befa2358-2851-4973-b1e2-6f003e9f1089-kube-api-access-zsqbs\") pod \"ovn-controller-ovs-q2ktf\" (UID: \"befa2358-2851-4973-b1e2-6f003e9f1089\") " pod="openstack/ovn-controller-ovs-q2ktf" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.507691 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/befa2358-2851-4973-b1e2-6f003e9f1089-var-log\") pod \"ovn-controller-ovs-q2ktf\" (UID: \"befa2358-2851-4973-b1e2-6f003e9f1089\") " pod="openstack/ovn-controller-ovs-q2ktf" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.609243 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w96g2\" (UniqueName: \"kubernetes.io/projected/91fa5e61-2577-4fad-9b32-395eb0e5105b-kube-api-access-w96g2\") pod \"ovn-controller-bs9lb\" (UID: \"91fa5e61-2577-4fad-9b32-395eb0e5105b\") " pod="openstack/ovn-controller-bs9lb" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.609309 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/91fa5e61-2577-4fad-9b32-395eb0e5105b-var-log-ovn\") pod \"ovn-controller-bs9lb\" (UID: \"91fa5e61-2577-4fad-9b32-395eb0e5105b\") " pod="openstack/ovn-controller-bs9lb" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.609357 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/befa2358-2851-4973-b1e2-6f003e9f1089-scripts\") pod \"ovn-controller-ovs-q2ktf\" (UID: \"befa2358-2851-4973-b1e2-6f003e9f1089\") " pod="openstack/ovn-controller-ovs-q2ktf" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.609381 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/91fa5e61-2577-4fad-9b32-395eb0e5105b-var-run\") pod \"ovn-controller-bs9lb\" (UID: \"91fa5e61-2577-4fad-9b32-395eb0e5105b\") " pod="openstack/ovn-controller-bs9lb" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.609403 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/befa2358-2851-4973-b1e2-6f003e9f1089-var-run\") pod \"ovn-controller-ovs-q2ktf\" (UID: \"befa2358-2851-4973-b1e2-6f003e9f1089\") " pod="openstack/ovn-controller-ovs-q2ktf" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.609428 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/91fa5e61-2577-4fad-9b32-395eb0e5105b-ovn-controller-tls-certs\") pod \"ovn-controller-bs9lb\" (UID: \"91fa5e61-2577-4fad-9b32-395eb0e5105b\") " pod="openstack/ovn-controller-bs9lb" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.609468 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/befa2358-2851-4973-b1e2-6f003e9f1089-etc-ovs\") pod \"ovn-controller-ovs-q2ktf\" (UID: 
\"befa2358-2851-4973-b1e2-6f003e9f1089\") " pod="openstack/ovn-controller-ovs-q2ktf" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.609495 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91fa5e61-2577-4fad-9b32-395eb0e5105b-scripts\") pod \"ovn-controller-bs9lb\" (UID: \"91fa5e61-2577-4fad-9b32-395eb0e5105b\") " pod="openstack/ovn-controller-bs9lb" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.609531 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/91fa5e61-2577-4fad-9b32-395eb0e5105b-var-run-ovn\") pod \"ovn-controller-bs9lb\" (UID: \"91fa5e61-2577-4fad-9b32-395eb0e5105b\") " pod="openstack/ovn-controller-bs9lb" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.609556 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/befa2358-2851-4973-b1e2-6f003e9f1089-var-lib\") pod \"ovn-controller-ovs-q2ktf\" (UID: \"befa2358-2851-4973-b1e2-6f003e9f1089\") " pod="openstack/ovn-controller-ovs-q2ktf" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.609588 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fa5e61-2577-4fad-9b32-395eb0e5105b-combined-ca-bundle\") pod \"ovn-controller-bs9lb\" (UID: \"91fa5e61-2577-4fad-9b32-395eb0e5105b\") " pod="openstack/ovn-controller-bs9lb" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.609623 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsqbs\" (UniqueName: \"kubernetes.io/projected/befa2358-2851-4973-b1e2-6f003e9f1089-kube-api-access-zsqbs\") pod \"ovn-controller-ovs-q2ktf\" (UID: \"befa2358-2851-4973-b1e2-6f003e9f1089\") " pod="openstack/ovn-controller-ovs-q2ktf" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 
12:44:29.609651 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/befa2358-2851-4973-b1e2-6f003e9f1089-var-log\") pod \"ovn-controller-ovs-q2ktf\" (UID: \"befa2358-2851-4973-b1e2-6f003e9f1089\") " pod="openstack/ovn-controller-ovs-q2ktf" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.609919 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/91fa5e61-2577-4fad-9b32-395eb0e5105b-var-run\") pod \"ovn-controller-bs9lb\" (UID: \"91fa5e61-2577-4fad-9b32-395eb0e5105b\") " pod="openstack/ovn-controller-bs9lb" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.609951 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/befa2358-2851-4973-b1e2-6f003e9f1089-etc-ovs\") pod \"ovn-controller-ovs-q2ktf\" (UID: \"befa2358-2851-4973-b1e2-6f003e9f1089\") " pod="openstack/ovn-controller-ovs-q2ktf" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.609963 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/91fa5e61-2577-4fad-9b32-395eb0e5105b-var-run-ovn\") pod \"ovn-controller-bs9lb\" (UID: \"91fa5e61-2577-4fad-9b32-395eb0e5105b\") " pod="openstack/ovn-controller-bs9lb" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.610017 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/befa2358-2851-4973-b1e2-6f003e9f1089-var-run\") pod \"ovn-controller-ovs-q2ktf\" (UID: \"befa2358-2851-4973-b1e2-6f003e9f1089\") " pod="openstack/ovn-controller-ovs-q2ktf" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.610037 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/befa2358-2851-4973-b1e2-6f003e9f1089-var-log\") pod 
\"ovn-controller-ovs-q2ktf\" (UID: \"befa2358-2851-4973-b1e2-6f003e9f1089\") " pod="openstack/ovn-controller-ovs-q2ktf" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.610398 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/befa2358-2851-4973-b1e2-6f003e9f1089-var-lib\") pod \"ovn-controller-ovs-q2ktf\" (UID: \"befa2358-2851-4973-b1e2-6f003e9f1089\") " pod="openstack/ovn-controller-ovs-q2ktf" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.610797 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/91fa5e61-2577-4fad-9b32-395eb0e5105b-var-log-ovn\") pod \"ovn-controller-bs9lb\" (UID: \"91fa5e61-2577-4fad-9b32-395eb0e5105b\") " pod="openstack/ovn-controller-bs9lb" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.611982 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91fa5e61-2577-4fad-9b32-395eb0e5105b-scripts\") pod \"ovn-controller-bs9lb\" (UID: \"91fa5e61-2577-4fad-9b32-395eb0e5105b\") " pod="openstack/ovn-controller-bs9lb" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.612052 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/befa2358-2851-4973-b1e2-6f003e9f1089-scripts\") pod \"ovn-controller-ovs-q2ktf\" (UID: \"befa2358-2851-4973-b1e2-6f003e9f1089\") " pod="openstack/ovn-controller-ovs-q2ktf" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.617809 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/91fa5e61-2577-4fad-9b32-395eb0e5105b-ovn-controller-tls-certs\") pod \"ovn-controller-bs9lb\" (UID: \"91fa5e61-2577-4fad-9b32-395eb0e5105b\") " pod="openstack/ovn-controller-bs9lb" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.624926 5024 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fa5e61-2577-4fad-9b32-395eb0e5105b-combined-ca-bundle\") pod \"ovn-controller-bs9lb\" (UID: \"91fa5e61-2577-4fad-9b32-395eb0e5105b\") " pod="openstack/ovn-controller-bs9lb" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.626370 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w96g2\" (UniqueName: \"kubernetes.io/projected/91fa5e61-2577-4fad-9b32-395eb0e5105b-kube-api-access-w96g2\") pod \"ovn-controller-bs9lb\" (UID: \"91fa5e61-2577-4fad-9b32-395eb0e5105b\") " pod="openstack/ovn-controller-bs9lb" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.630060 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsqbs\" (UniqueName: \"kubernetes.io/projected/befa2358-2851-4973-b1e2-6f003e9f1089-kube-api-access-zsqbs\") pod \"ovn-controller-ovs-q2ktf\" (UID: \"befa2358-2851-4973-b1e2-6f003e9f1089\") " pod="openstack/ovn-controller-ovs-q2ktf" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.733551 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bs9lb" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.772493 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-q2ktf" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.815243 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.816688 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.821089 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.821357 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.821476 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.821609 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.821716 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-jzq9h" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.825392 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.914879 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a72fb2e5-9365-4aea-854d-06997dde109c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a72fb2e5-9365-4aea-854d-06997dde109c\") " pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.914930 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a72fb2e5-9365-4aea-854d-06997dde109c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a72fb2e5-9365-4aea-854d-06997dde109c\") " pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.914976 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a72fb2e5-9365-4aea-854d-06997dde109c-config\") pod \"ovsdbserver-nb-0\" (UID: \"a72fb2e5-9365-4aea-854d-06997dde109c\") " pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.915020 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a72fb2e5-9365-4aea-854d-06997dde109c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a72fb2e5-9365-4aea-854d-06997dde109c\") " pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.915054 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a72fb2e5-9365-4aea-854d-06997dde109c\") " pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.915071 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99v7k\" (UniqueName: \"kubernetes.io/projected/a72fb2e5-9365-4aea-854d-06997dde109c-kube-api-access-99v7k\") pod \"ovsdbserver-nb-0\" (UID: \"a72fb2e5-9365-4aea-854d-06997dde109c\") " pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.915108 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a72fb2e5-9365-4aea-854d-06997dde109c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a72fb2e5-9365-4aea-854d-06997dde109c\") " pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:29 crc kubenswrapper[5024]: I1007 12:44:29.915126 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/a72fb2e5-9365-4aea-854d-06997dde109c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a72fb2e5-9365-4aea-854d-06997dde109c\") " pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:30 crc kubenswrapper[5024]: I1007 12:44:30.017773 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a72fb2e5-9365-4aea-854d-06997dde109c-config\") pod \"ovsdbserver-nb-0\" (UID: \"a72fb2e5-9365-4aea-854d-06997dde109c\") " pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:30 crc kubenswrapper[5024]: I1007 12:44:30.017828 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a72fb2e5-9365-4aea-854d-06997dde109c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a72fb2e5-9365-4aea-854d-06997dde109c\") " pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:30 crc kubenswrapper[5024]: I1007 12:44:30.017870 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a72fb2e5-9365-4aea-854d-06997dde109c\") " pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:30 crc kubenswrapper[5024]: I1007 12:44:30.017899 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99v7k\" (UniqueName: \"kubernetes.io/projected/a72fb2e5-9365-4aea-854d-06997dde109c-kube-api-access-99v7k\") pod \"ovsdbserver-nb-0\" (UID: \"a72fb2e5-9365-4aea-854d-06997dde109c\") " pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:30 crc kubenswrapper[5024]: I1007 12:44:30.017945 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a72fb2e5-9365-4aea-854d-06997dde109c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a72fb2e5-9365-4aea-854d-06997dde109c\") " pod="openstack/ovsdbserver-nb-0" Oct 07 
12:44:30 crc kubenswrapper[5024]: I1007 12:44:30.017970 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a72fb2e5-9365-4aea-854d-06997dde109c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a72fb2e5-9365-4aea-854d-06997dde109c\") " pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:30 crc kubenswrapper[5024]: I1007 12:44:30.018085 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a72fb2e5-9365-4aea-854d-06997dde109c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a72fb2e5-9365-4aea-854d-06997dde109c\") " pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:30 crc kubenswrapper[5024]: I1007 12:44:30.018127 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a72fb2e5-9365-4aea-854d-06997dde109c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a72fb2e5-9365-4aea-854d-06997dde109c\") " pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:30 crc kubenswrapper[5024]: I1007 12:44:30.018485 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a72fb2e5-9365-4aea-854d-06997dde109c\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:30 crc kubenswrapper[5024]: I1007 12:44:30.021687 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a72fb2e5-9365-4aea-854d-06997dde109c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a72fb2e5-9365-4aea-854d-06997dde109c\") " pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:30 crc kubenswrapper[5024]: I1007 12:44:30.021812 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a72fb2e5-9365-4aea-854d-06997dde109c-config\") pod \"ovsdbserver-nb-0\" (UID: \"a72fb2e5-9365-4aea-854d-06997dde109c\") " pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:30 crc kubenswrapper[5024]: I1007 12:44:30.022875 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a72fb2e5-9365-4aea-854d-06997dde109c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a72fb2e5-9365-4aea-854d-06997dde109c\") " pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:30 crc kubenswrapper[5024]: I1007 12:44:30.027608 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a72fb2e5-9365-4aea-854d-06997dde109c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a72fb2e5-9365-4aea-854d-06997dde109c\") " pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:30 crc kubenswrapper[5024]: I1007 12:44:30.030295 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a72fb2e5-9365-4aea-854d-06997dde109c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a72fb2e5-9365-4aea-854d-06997dde109c\") " pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:30 crc kubenswrapper[5024]: I1007 12:44:30.032441 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a72fb2e5-9365-4aea-854d-06997dde109c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a72fb2e5-9365-4aea-854d-06997dde109c\") " pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:30 crc kubenswrapper[5024]: I1007 12:44:30.050089 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a72fb2e5-9365-4aea-854d-06997dde109c\") " pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:30 crc 
kubenswrapper[5024]: I1007 12:44:30.050563 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99v7k\" (UniqueName: \"kubernetes.io/projected/a72fb2e5-9365-4aea-854d-06997dde109c-kube-api-access-99v7k\") pod \"ovsdbserver-nb-0\" (UID: \"a72fb2e5-9365-4aea-854d-06997dde109c\") " pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:30 crc kubenswrapper[5024]: I1007 12:44:30.144975 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:32 crc kubenswrapper[5024]: I1007 12:44:32.263863 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 07 12:44:32 crc kubenswrapper[5024]: I1007 12:44:32.348609 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 07 12:44:32 crc kubenswrapper[5024]: E1007 12:44:32.886301 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 07 12:44:32 crc kubenswrapper[5024]: E1007 12:44:32.886925 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fzh2r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-wf4n8_openstack(5498c13e-f180-472a-8fc7-e1d4ea17b853): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 07 12:44:32 crc kubenswrapper[5024]: E1007 12:44:32.888405 5024 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-wf4n8" podUID="5498c13e-f180-472a-8fc7-e1d4ea17b853" Oct 07 12:44:32 crc kubenswrapper[5024]: E1007 12:44:32.952128 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 07 12:44:32 crc kubenswrapper[5024]: E1007 12:44:32.952295 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nxrrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-kj7lt_openstack(0ab71f66-cf38-499b-b384-5519211260d0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 07 12:44:32 crc kubenswrapper[5024]: E1007 12:44:32.953675 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-kj7lt" podUID="0ab71f66-cf38-499b-b384-5519211260d0" Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.347008 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 07 12:44:33 crc kubenswrapper[5024]: W1007 12:44:33.381695 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b493394_e353_45b2_b7a9_71b94654e2e7.slice/crio-381152e5584120f3ccabc85c3f42b8b26068f7b51de806f5e8dc099ed13617b8 WatchSource:0}: Error finding container 381152e5584120f3ccabc85c3f42b8b26068f7b51de806f5e8dc099ed13617b8: Status 404 returned error can't find the container with id 381152e5584120f3ccabc85c3f42b8b26068f7b51de806f5e8dc099ed13617b8 Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.428468 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 07 12:44:33 crc kubenswrapper[5024]: W1007 12:44:33.436010 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb894b3e_4bf4_46b0_8e54_e4a17c02d13f.slice/crio-04a61398d2d0069ea1cc34914d9a59420a02d4659e3d0ae1fb1ccb4da4988b69 WatchSource:0}: Error finding container 04a61398d2d0069ea1cc34914d9a59420a02d4659e3d0ae1fb1ccb4da4988b69: Status 404 returned error can't find the container with id 04a61398d2d0069ea1cc34914d9a59420a02d4659e3d0ae1fb1ccb4da4988b69 Oct 07 12:44:33 crc kubenswrapper[5024]: W1007 12:44:33.437595 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacb1b089_7f99_44d4_9e4d_4cb652ee6e21.slice/crio-4bd3835b38f0e11534ee258cec879836c683a7426df351922ab5cfb0e69d06e7 WatchSource:0}: Error finding container 4bd3835b38f0e11534ee258cec879836c683a7426df351922ab5cfb0e69d06e7: Status 404 returned error can't find the container with id 
4bd3835b38f0e11534ee258cec879836c683a7426df351922ab5cfb0e69d06e7 Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.437655 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.603863 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.668203 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.672238 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.682517 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.683026 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.683750 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-wwrbv" Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.685414 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.689796 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.694725 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 07 12:44:33 crc kubenswrapper[5024]: W1007 12:44:33.724098 5024 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda72fb2e5_9365_4aea_854d_06997dde109c.slice/crio-b4d71eb4ba7a68354a7081072f9cd2ee45d0c2c40554ac8c6c4a402d33e1abb5 WatchSource:0}: Error finding container b4d71eb4ba7a68354a7081072f9cd2ee45d0c2c40554ac8c6c4a402d33e1abb5: Status 404 returned error can't find the container with id b4d71eb4ba7a68354a7081072f9cd2ee45d0c2c40554ac8c6c4a402d33e1abb5 Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.724534 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bs9lb"] Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.764127 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-q2ktf"] Oct 07 12:44:33 crc kubenswrapper[5024]: W1007 12:44:33.765930 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbefa2358_2851_4973_b1e2_6f003e9f1089.slice/crio-1e1c362737514bf0604d361c028df8eaf192ba1fcbc45db7b5b3123890c39559 WatchSource:0}: Error finding container 1e1c362737514bf0604d361c028df8eaf192ba1fcbc45db7b5b3123890c39559: Status 404 returned error can't find the container with id 1e1c362737514bf0604d361c028df8eaf192ba1fcbc45db7b5b3123890c39559 Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.784172 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/194f8a33-76cf-48a3-a4fc-0ff4eb701bb5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"194f8a33-76cf-48a3-a4fc-0ff4eb701bb5\") " pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.784247 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"194f8a33-76cf-48a3-a4fc-0ff4eb701bb5\") " pod="openstack/ovsdbserver-sb-0" 
Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.784395 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/194f8a33-76cf-48a3-a4fc-0ff4eb701bb5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"194f8a33-76cf-48a3-a4fc-0ff4eb701bb5\") " pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.784454 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hfcb\" (UniqueName: \"kubernetes.io/projected/194f8a33-76cf-48a3-a4fc-0ff4eb701bb5-kube-api-access-5hfcb\") pod \"ovsdbserver-sb-0\" (UID: \"194f8a33-76cf-48a3-a4fc-0ff4eb701bb5\") " pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.784484 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/194f8a33-76cf-48a3-a4fc-0ff4eb701bb5-config\") pod \"ovsdbserver-sb-0\" (UID: \"194f8a33-76cf-48a3-a4fc-0ff4eb701bb5\") " pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.784572 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/194f8a33-76cf-48a3-a4fc-0ff4eb701bb5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"194f8a33-76cf-48a3-a4fc-0ff4eb701bb5\") " pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.784772 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/194f8a33-76cf-48a3-a4fc-0ff4eb701bb5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"194f8a33-76cf-48a3-a4fc-0ff4eb701bb5\") " pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:33 crc kubenswrapper[5024]: 
I1007 12:44:33.784825 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/194f8a33-76cf-48a3-a4fc-0ff4eb701bb5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"194f8a33-76cf-48a3-a4fc-0ff4eb701bb5\") " pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.843848 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5b493394-e353-45b2-b7a9-71b94654e2e7","Type":"ContainerStarted","Data":"381152e5584120f3ccabc85c3f42b8b26068f7b51de806f5e8dc099ed13617b8"} Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.846507 5024 generic.go:334] "Generic (PLEG): container finished" podID="b73b80f9-e3fe-4903-87e1-4ad25a161520" containerID="49cacb58068990281955269af2886734d6e15c2d301ab6f1140a7fd281d97d60" exitCode=0 Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.846633 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-krs8x" event={"ID":"b73b80f9-e3fe-4903-87e1-4ad25a161520","Type":"ContainerDied","Data":"49cacb58068990281955269af2886734d6e15c2d301ab6f1140a7fd281d97d60"} Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.851644 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-q2ktf" event={"ID":"befa2358-2851-4973-b1e2-6f003e9f1089","Type":"ContainerStarted","Data":"1e1c362737514bf0604d361c028df8eaf192ba1fcbc45db7b5b3123890c39559"} Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.856698 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f381de8f-0818-456c-9acb-7ee939a6da12","Type":"ContainerStarted","Data":"43e00eb93dd077c7c1f557de0c81413bc74cde2f7e6abc216dd6e9703534a952"} Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.865251 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bs9lb" 
event={"ID":"91fa5e61-2577-4fad-9b32-395eb0e5105b","Type":"ContainerStarted","Data":"aa6b7adbf395b20af12a276982e23bfa7668b7bbdf54eb74b75651ab468ffb9f"} Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.866980 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d2d61d4d-4921-4832-bb53-3ca3a70663cf","Type":"ContainerStarted","Data":"05b944984f9a20d2dacb7d00eb2a7f3cc1b4c789e346030b65843277ad199c2f"} Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.872169 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a72fb2e5-9365-4aea-854d-06997dde109c","Type":"ContainerStarted","Data":"b4d71eb4ba7a68354a7081072f9cd2ee45d0c2c40554ac8c6c4a402d33e1abb5"} Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.874494 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"acb1b089-7f99-44d4-9e4d-4cb652ee6e21","Type":"ContainerStarted","Data":"4bd3835b38f0e11534ee258cec879836c683a7426df351922ab5cfb0e69d06e7"} Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.876888 5024 generic.go:334] "Generic (PLEG): container finished" podID="dee386d3-259e-4044-93d6-7090236db6d7" containerID="7db8841451472c297e97810fa679138431bc37822aedaaf816e292eb83e29dfc" exitCode=0 Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.876940 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5lzpd" event={"ID":"dee386d3-259e-4044-93d6-7090236db6d7","Type":"ContainerDied","Data":"7db8841451472c297e97810fa679138431bc37822aedaaf816e292eb83e29dfc"} Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.879966 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"7e4863f6-5bdf-407e-ab2c-a161223537cc","Type":"ContainerStarted","Data":"02159d03555f08b361f8ec425496afcd65000130c68726937ba23fbe2273c8ea"} Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.886918 5024 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f","Type":"ContainerStarted","Data":"04a61398d2d0069ea1cc34914d9a59420a02d4659e3d0ae1fb1ccb4da4988b69"} Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.887111 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/194f8a33-76cf-48a3-a4fc-0ff4eb701bb5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"194f8a33-76cf-48a3-a4fc-0ff4eb701bb5\") " pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.887180 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hfcb\" (UniqueName: \"kubernetes.io/projected/194f8a33-76cf-48a3-a4fc-0ff4eb701bb5-kube-api-access-5hfcb\") pod \"ovsdbserver-sb-0\" (UID: \"194f8a33-76cf-48a3-a4fc-0ff4eb701bb5\") " pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.887204 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/194f8a33-76cf-48a3-a4fc-0ff4eb701bb5-config\") pod \"ovsdbserver-sb-0\" (UID: \"194f8a33-76cf-48a3-a4fc-0ff4eb701bb5\") " pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.887259 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/194f8a33-76cf-48a3-a4fc-0ff4eb701bb5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"194f8a33-76cf-48a3-a4fc-0ff4eb701bb5\") " pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.887445 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/194f8a33-76cf-48a3-a4fc-0ff4eb701bb5-ovsdbserver-sb-tls-certs\") pod 
\"ovsdbserver-sb-0\" (UID: \"194f8a33-76cf-48a3-a4fc-0ff4eb701bb5\") " pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.887527 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/194f8a33-76cf-48a3-a4fc-0ff4eb701bb5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"194f8a33-76cf-48a3-a4fc-0ff4eb701bb5\") " pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.889379 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/194f8a33-76cf-48a3-a4fc-0ff4eb701bb5-config\") pod \"ovsdbserver-sb-0\" (UID: \"194f8a33-76cf-48a3-a4fc-0ff4eb701bb5\") " pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.890325 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/194f8a33-76cf-48a3-a4fc-0ff4eb701bb5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"194f8a33-76cf-48a3-a4fc-0ff4eb701bb5\") " pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.895365 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/194f8a33-76cf-48a3-a4fc-0ff4eb701bb5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"194f8a33-76cf-48a3-a4fc-0ff4eb701bb5\") " pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.903412 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/194f8a33-76cf-48a3-a4fc-0ff4eb701bb5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"194f8a33-76cf-48a3-a4fc-0ff4eb701bb5\") " pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.904875 5024 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/194f8a33-76cf-48a3-a4fc-0ff4eb701bb5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"194f8a33-76cf-48a3-a4fc-0ff4eb701bb5\") " pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.905036 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"194f8a33-76cf-48a3-a4fc-0ff4eb701bb5\") " pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.905879 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"194f8a33-76cf-48a3-a4fc-0ff4eb701bb5\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.908010 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/194f8a33-76cf-48a3-a4fc-0ff4eb701bb5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"194f8a33-76cf-48a3-a4fc-0ff4eb701bb5\") " pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.909532 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/194f8a33-76cf-48a3-a4fc-0ff4eb701bb5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"194f8a33-76cf-48a3-a4fc-0ff4eb701bb5\") " pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.910728 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hfcb\" (UniqueName: \"kubernetes.io/projected/194f8a33-76cf-48a3-a4fc-0ff4eb701bb5-kube-api-access-5hfcb\") pod \"ovsdbserver-sb-0\" (UID: 
\"194f8a33-76cf-48a3-a4fc-0ff4eb701bb5\") " pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:33 crc kubenswrapper[5024]: I1007 12:44:33.945328 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"194f8a33-76cf-48a3-a4fc-0ff4eb701bb5\") " pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.027021 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.273516 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kj7lt" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.324541 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-wf4n8" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.419338 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzh2r\" (UniqueName: \"kubernetes.io/projected/5498c13e-f180-472a-8fc7-e1d4ea17b853-kube-api-access-fzh2r\") pod \"5498c13e-f180-472a-8fc7-e1d4ea17b853\" (UID: \"5498c13e-f180-472a-8fc7-e1d4ea17b853\") " Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.420673 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5498c13e-f180-472a-8fc7-e1d4ea17b853-dns-svc\") pod \"5498c13e-f180-472a-8fc7-e1d4ea17b853\" (UID: \"5498c13e-f180-472a-8fc7-e1d4ea17b853\") " Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.420758 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab71f66-cf38-499b-b384-5519211260d0-config\") pod \"0ab71f66-cf38-499b-b384-5519211260d0\" (UID: 
\"0ab71f66-cf38-499b-b384-5519211260d0\") " Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.420780 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxrrg\" (UniqueName: \"kubernetes.io/projected/0ab71f66-cf38-499b-b384-5519211260d0-kube-api-access-nxrrg\") pod \"0ab71f66-cf38-499b-b384-5519211260d0\" (UID: \"0ab71f66-cf38-499b-b384-5519211260d0\") " Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.420890 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5498c13e-f180-472a-8fc7-e1d4ea17b853-config\") pod \"5498c13e-f180-472a-8fc7-e1d4ea17b853\" (UID: \"5498c13e-f180-472a-8fc7-e1d4ea17b853\") " Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.421595 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5498c13e-f180-472a-8fc7-e1d4ea17b853-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5498c13e-f180-472a-8fc7-e1d4ea17b853" (UID: "5498c13e-f180-472a-8fc7-e1d4ea17b853"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.421767 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5498c13e-f180-472a-8fc7-e1d4ea17b853-config" (OuterVolumeSpecName: "config") pod "5498c13e-f180-472a-8fc7-e1d4ea17b853" (UID: "5498c13e-f180-472a-8fc7-e1d4ea17b853"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.421998 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ab71f66-cf38-499b-b384-5519211260d0-config" (OuterVolumeSpecName: "config") pod "0ab71f66-cf38-499b-b384-5519211260d0" (UID: "0ab71f66-cf38-499b-b384-5519211260d0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.424743 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5498c13e-f180-472a-8fc7-e1d4ea17b853-kube-api-access-fzh2r" (OuterVolumeSpecName: "kube-api-access-fzh2r") pod "5498c13e-f180-472a-8fc7-e1d4ea17b853" (UID: "5498c13e-f180-472a-8fc7-e1d4ea17b853"). InnerVolumeSpecName "kube-api-access-fzh2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.427854 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ab71f66-cf38-499b-b384-5519211260d0-kube-api-access-nxrrg" (OuterVolumeSpecName: "kube-api-access-nxrrg") pod "0ab71f66-cf38-499b-b384-5519211260d0" (UID: "0ab71f66-cf38-499b-b384-5519211260d0"). InnerVolumeSpecName "kube-api-access-nxrrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.523485 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab71f66-cf38-499b-b384-5519211260d0-config\") on node \"crc\" DevicePath \"\"" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.523528 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxrrg\" (UniqueName: \"kubernetes.io/projected/0ab71f66-cf38-499b-b384-5519211260d0-kube-api-access-nxrrg\") on node \"crc\" DevicePath \"\"" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.523546 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5498c13e-f180-472a-8fc7-e1d4ea17b853-config\") on node \"crc\" DevicePath \"\"" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.523556 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzh2r\" (UniqueName: 
\"kubernetes.io/projected/5498c13e-f180-472a-8fc7-e1d4ea17b853-kube-api-access-fzh2r\") on node \"crc\" DevicePath \"\"" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.523567 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5498c13e-f180-472a-8fc7-e1d4ea17b853-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.682469 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.725667 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-n2jhs"] Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.727787 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-n2jhs" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.737368 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.802321 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-n2jhs"] Oct 07 12:44:34 crc kubenswrapper[5024]: W1007 12:44:34.828903 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod194f8a33_76cf_48a3_a4fc_0ff4eb701bb5.slice/crio-e6157da4f29e113c4896b502fdfc04e44cc757007fb15ddbbe689f1cdeba1f76 WatchSource:0}: Error finding container e6157da4f29e113c4896b502fdfc04e44cc757007fb15ddbbe689f1cdeba1f76: Status 404 returned error can't find the container with id e6157da4f29e113c4896b502fdfc04e44cc757007fb15ddbbe689f1cdeba1f76 Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.829459 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: 
\"kubernetes.io/host-path/903dfb65-4299-4443-a62d-216c7e5a2953-ovs-rundir\") pod \"ovn-controller-metrics-n2jhs\" (UID: \"903dfb65-4299-4443-a62d-216c7e5a2953\") " pod="openstack/ovn-controller-metrics-n2jhs" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.829877 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/903dfb65-4299-4443-a62d-216c7e5a2953-ovn-rundir\") pod \"ovn-controller-metrics-n2jhs\" (UID: \"903dfb65-4299-4443-a62d-216c7e5a2953\") " pod="openstack/ovn-controller-metrics-n2jhs" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.830330 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88bcf\" (UniqueName: \"kubernetes.io/projected/903dfb65-4299-4443-a62d-216c7e5a2953-kube-api-access-88bcf\") pod \"ovn-controller-metrics-n2jhs\" (UID: \"903dfb65-4299-4443-a62d-216c7e5a2953\") " pod="openstack/ovn-controller-metrics-n2jhs" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.830789 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/903dfb65-4299-4443-a62d-216c7e5a2953-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-n2jhs\" (UID: \"903dfb65-4299-4443-a62d-216c7e5a2953\") " pod="openstack/ovn-controller-metrics-n2jhs" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.831334 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/903dfb65-4299-4443-a62d-216c7e5a2953-combined-ca-bundle\") pod \"ovn-controller-metrics-n2jhs\" (UID: \"903dfb65-4299-4443-a62d-216c7e5a2953\") " pod="openstack/ovn-controller-metrics-n2jhs" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.831540 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/903dfb65-4299-4443-a62d-216c7e5a2953-config\") pod \"ovn-controller-metrics-n2jhs\" (UID: \"903dfb65-4299-4443-a62d-216c7e5a2953\") " pod="openstack/ovn-controller-metrics-n2jhs" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.907212 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-krs8x"] Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.923422 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-cm2bw"] Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.961886 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/903dfb65-4299-4443-a62d-216c7e5a2953-ovs-rundir\") pod \"ovn-controller-metrics-n2jhs\" (UID: \"903dfb65-4299-4443-a62d-216c7e5a2953\") " pod="openstack/ovn-controller-metrics-n2jhs" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.971374 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/903dfb65-4299-4443-a62d-216c7e5a2953-ovn-rundir\") pod \"ovn-controller-metrics-n2jhs\" (UID: \"903dfb65-4299-4443-a62d-216c7e5a2953\") " pod="openstack/ovn-controller-metrics-n2jhs" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.971526 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88bcf\" (UniqueName: \"kubernetes.io/projected/903dfb65-4299-4443-a62d-216c7e5a2953-kube-api-access-88bcf\") pod \"ovn-controller-metrics-n2jhs\" (UID: \"903dfb65-4299-4443-a62d-216c7e5a2953\") " pod="openstack/ovn-controller-metrics-n2jhs" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.971638 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/903dfb65-4299-4443-a62d-216c7e5a2953-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-n2jhs\" (UID: \"903dfb65-4299-4443-a62d-216c7e5a2953\") " pod="openstack/ovn-controller-metrics-n2jhs" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.971754 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/903dfb65-4299-4443-a62d-216c7e5a2953-combined-ca-bundle\") pod \"ovn-controller-metrics-n2jhs\" (UID: \"903dfb65-4299-4443-a62d-216c7e5a2953\") " pod="openstack/ovn-controller-metrics-n2jhs" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.971889 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/903dfb65-4299-4443-a62d-216c7e5a2953-config\") pod \"ovn-controller-metrics-n2jhs\" (UID: \"903dfb65-4299-4443-a62d-216c7e5a2953\") " pod="openstack/ovn-controller-metrics-n2jhs" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.965588 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/903dfb65-4299-4443-a62d-216c7e5a2953-ovs-rundir\") pod \"ovn-controller-metrics-n2jhs\" (UID: \"903dfb65-4299-4443-a62d-216c7e5a2953\") " pod="openstack/ovn-controller-metrics-n2jhs" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.974099 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/903dfb65-4299-4443-a62d-216c7e5a2953-ovn-rundir\") pod \"ovn-controller-metrics-n2jhs\" (UID: \"903dfb65-4299-4443-a62d-216c7e5a2953\") " pod="openstack/ovn-controller-metrics-n2jhs" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.968396 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-cm2bw" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.974885 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/903dfb65-4299-4443-a62d-216c7e5a2953-config\") pod \"ovn-controller-metrics-n2jhs\" (UID: \"903dfb65-4299-4443-a62d-216c7e5a2953\") " pod="openstack/ovn-controller-metrics-n2jhs" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.967741 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-krs8x" event={"ID":"b73b80f9-e3fe-4903-87e1-4ad25a161520","Type":"ContainerStarted","Data":"b14e095c3874b47a904bb6ec405edd43d012f77f532705e1ad1a3329b7e37a6c"} Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.978509 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc8479f9-krs8x" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.978792 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-cm2bw"] Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.982185 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/903dfb65-4299-4443-a62d-216c7e5a2953-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-n2jhs\" (UID: \"903dfb65-4299-4443-a62d-216c7e5a2953\") " pod="openstack/ovn-controller-metrics-n2jhs" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.989284 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-wf4n8" event={"ID":"5498c13e-f180-472a-8fc7-e1d4ea17b853","Type":"ContainerDied","Data":"045f2c487b7aaf2c11a3871bae9bd852deecec3a7d3d5ead0e0ba4d065a2fde9"} Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.989480 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-wf4n8" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.990577 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Oct 07 12:44:34 crc kubenswrapper[5024]: I1007 12:44:34.997816 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88bcf\" (UniqueName: \"kubernetes.io/projected/903dfb65-4299-4443-a62d-216c7e5a2953-kube-api-access-88bcf\") pod \"ovn-controller-metrics-n2jhs\" (UID: \"903dfb65-4299-4443-a62d-216c7e5a2953\") " pod="openstack/ovn-controller-metrics-n2jhs" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.005628 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-kj7lt" event={"ID":"0ab71f66-cf38-499b-b384-5519211260d0","Type":"ContainerDied","Data":"dbae4e5e4ade682043961fbf27bdf2c38630886bb25dd57dd5a364b633c107f4"} Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.005783 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kj7lt" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.009093 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"194f8a33-76cf-48a3-a4fc-0ff4eb701bb5","Type":"ContainerStarted","Data":"e6157da4f29e113c4896b502fdfc04e44cc757007fb15ddbbe689f1cdeba1f76"} Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.047850 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/903dfb65-4299-4443-a62d-216c7e5a2953-combined-ca-bundle\") pod \"ovn-controller-metrics-n2jhs\" (UID: \"903dfb65-4299-4443-a62d-216c7e5a2953\") " pod="openstack/ovn-controller-metrics-n2jhs" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.057285 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc8479f9-krs8x" podStartSLOduration=7.243468712 podStartE2EDuration="17.057266911s" podCreationTimestamp="2025-10-07 12:44:18 +0000 UTC" firstStartedPulling="2025-10-07 12:44:23.343927389 +0000 UTC m=+1001.419714227" lastFinishedPulling="2025-10-07 12:44:33.157725588 +0000 UTC m=+1011.233512426" observedRunningTime="2025-10-07 12:44:34.990978957 +0000 UTC m=+1013.066765805" watchObservedRunningTime="2025-10-07 12:44:35.057266911 +0000 UTC m=+1013.133053749" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.057615 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5lzpd" event={"ID":"dee386d3-259e-4044-93d6-7090236db6d7","Type":"ContainerStarted","Data":"059b45016c5461ece6fb757633a1fe85baa6833df1832154b5da070db99eb94d"} Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.063730 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-5lzpd" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.069169 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-n2jhs" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.073868 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnh98\" (UniqueName: \"kubernetes.io/projected/410dd573-7d12-4164-a27b-9e016a8a4fbf-kube-api-access-xnh98\") pod \"dnsmasq-dns-7fd796d7df-cm2bw\" (UID: \"410dd573-7d12-4164-a27b-9e016a8a4fbf\") " pod="openstack/dnsmasq-dns-7fd796d7df-cm2bw" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.074014 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/410dd573-7d12-4164-a27b-9e016a8a4fbf-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-cm2bw\" (UID: \"410dd573-7d12-4164-a27b-9e016a8a4fbf\") " pod="openstack/dnsmasq-dns-7fd796d7df-cm2bw" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.074205 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/410dd573-7d12-4164-a27b-9e016a8a4fbf-config\") pod \"dnsmasq-dns-7fd796d7df-cm2bw\" (UID: \"410dd573-7d12-4164-a27b-9e016a8a4fbf\") " pod="openstack/dnsmasq-dns-7fd796d7df-cm2bw" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.074596 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/410dd573-7d12-4164-a27b-9e016a8a4fbf-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-cm2bw\" (UID: \"410dd573-7d12-4164-a27b-9e016a8a4fbf\") " pod="openstack/dnsmasq-dns-7fd796d7df-cm2bw" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.083452 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-5lzpd" podStartSLOduration=6.134031772 podStartE2EDuration="16.083432548s" podCreationTimestamp="2025-10-07 12:44:19 +0000 UTC" 
firstStartedPulling="2025-10-07 12:44:23.328662592 +0000 UTC m=+1001.404449430" lastFinishedPulling="2025-10-07 12:44:33.278063368 +0000 UTC m=+1011.353850206" observedRunningTime="2025-10-07 12:44:35.075528027 +0000 UTC m=+1013.151314865" watchObservedRunningTime="2025-10-07 12:44:35.083432548 +0000 UTC m=+1013.159219386" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.171615 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kj7lt"] Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.176772 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/410dd573-7d12-4164-a27b-9e016a8a4fbf-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-cm2bw\" (UID: \"410dd573-7d12-4164-a27b-9e016a8a4fbf\") " pod="openstack/dnsmasq-dns-7fd796d7df-cm2bw" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.176861 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/410dd573-7d12-4164-a27b-9e016a8a4fbf-config\") pod \"dnsmasq-dns-7fd796d7df-cm2bw\" (UID: \"410dd573-7d12-4164-a27b-9e016a8a4fbf\") " pod="openstack/dnsmasq-dns-7fd796d7df-cm2bw" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.176923 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/410dd573-7d12-4164-a27b-9e016a8a4fbf-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-cm2bw\" (UID: \"410dd573-7d12-4164-a27b-9e016a8a4fbf\") " pod="openstack/dnsmasq-dns-7fd796d7df-cm2bw" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.176963 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnh98\" (UniqueName: \"kubernetes.io/projected/410dd573-7d12-4164-a27b-9e016a8a4fbf-kube-api-access-xnh98\") pod \"dnsmasq-dns-7fd796d7df-cm2bw\" (UID: \"410dd573-7d12-4164-a27b-9e016a8a4fbf\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-cm2bw" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.182770 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/410dd573-7d12-4164-a27b-9e016a8a4fbf-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-cm2bw\" (UID: \"410dd573-7d12-4164-a27b-9e016a8a4fbf\") " pod="openstack/dnsmasq-dns-7fd796d7df-cm2bw" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.185913 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/410dd573-7d12-4164-a27b-9e016a8a4fbf-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-cm2bw\" (UID: \"410dd573-7d12-4164-a27b-9e016a8a4fbf\") " pod="openstack/dnsmasq-dns-7fd796d7df-cm2bw" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.196028 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/410dd573-7d12-4164-a27b-9e016a8a4fbf-config\") pod \"dnsmasq-dns-7fd796d7df-cm2bw\" (UID: \"410dd573-7d12-4164-a27b-9e016a8a4fbf\") " pod="openstack/dnsmasq-dns-7fd796d7df-cm2bw" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.196200 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kj7lt"] Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.206104 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnh98\" (UniqueName: \"kubernetes.io/projected/410dd573-7d12-4164-a27b-9e016a8a4fbf-kube-api-access-xnh98\") pod \"dnsmasq-dns-7fd796d7df-cm2bw\" (UID: \"410dd573-7d12-4164-a27b-9e016a8a4fbf\") " pod="openstack/dnsmasq-dns-7fd796d7df-cm2bw" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.232262 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5lzpd"] Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.243538 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-78dd6ddcc-wf4n8"] Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.251069 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-wf4n8"] Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.256391 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-szxq8"] Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.258303 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.262183 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.271856 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-szxq8"] Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.384454 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/347553c3-a0ad-42c8-9923-5b17ba77a7a6-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-szxq8\" (UID: \"347553c3-a0ad-42c8-9923-5b17ba77a7a6\") " pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.385236 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/347553c3-a0ad-42c8-9923-5b17ba77a7a6-config\") pod \"dnsmasq-dns-86db49b7ff-szxq8\" (UID: \"347553c3-a0ad-42c8-9923-5b17ba77a7a6\") " pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.385294 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52xbc\" (UniqueName: \"kubernetes.io/projected/347553c3-a0ad-42c8-9923-5b17ba77a7a6-kube-api-access-52xbc\") pod 
\"dnsmasq-dns-86db49b7ff-szxq8\" (UID: \"347553c3-a0ad-42c8-9923-5b17ba77a7a6\") " pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.385463 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/347553c3-a0ad-42c8-9923-5b17ba77a7a6-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-szxq8\" (UID: \"347553c3-a0ad-42c8-9923-5b17ba77a7a6\") " pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.385614 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/347553c3-a0ad-42c8-9923-5b17ba77a7a6-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-szxq8\" (UID: \"347553c3-a0ad-42c8-9923-5b17ba77a7a6\") " pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.400900 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-cm2bw" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.488539 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/347553c3-a0ad-42c8-9923-5b17ba77a7a6-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-szxq8\" (UID: \"347553c3-a0ad-42c8-9923-5b17ba77a7a6\") " pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.488601 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/347553c3-a0ad-42c8-9923-5b17ba77a7a6-config\") pod \"dnsmasq-dns-86db49b7ff-szxq8\" (UID: \"347553c3-a0ad-42c8-9923-5b17ba77a7a6\") " pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.488651 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52xbc\" (UniqueName: \"kubernetes.io/projected/347553c3-a0ad-42c8-9923-5b17ba77a7a6-kube-api-access-52xbc\") pod \"dnsmasq-dns-86db49b7ff-szxq8\" (UID: \"347553c3-a0ad-42c8-9923-5b17ba77a7a6\") " pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.488686 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/347553c3-a0ad-42c8-9923-5b17ba77a7a6-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-szxq8\" (UID: \"347553c3-a0ad-42c8-9923-5b17ba77a7a6\") " pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.488735 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/347553c3-a0ad-42c8-9923-5b17ba77a7a6-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-szxq8\" (UID: \"347553c3-a0ad-42c8-9923-5b17ba77a7a6\") " pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" 
Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.489760 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/347553c3-a0ad-42c8-9923-5b17ba77a7a6-config\") pod \"dnsmasq-dns-86db49b7ff-szxq8\" (UID: \"347553c3-a0ad-42c8-9923-5b17ba77a7a6\") " pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.489922 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/347553c3-a0ad-42c8-9923-5b17ba77a7a6-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-szxq8\" (UID: \"347553c3-a0ad-42c8-9923-5b17ba77a7a6\") " pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.490117 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/347553c3-a0ad-42c8-9923-5b17ba77a7a6-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-szxq8\" (UID: \"347553c3-a0ad-42c8-9923-5b17ba77a7a6\") " pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.490328 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/347553c3-a0ad-42c8-9923-5b17ba77a7a6-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-szxq8\" (UID: \"347553c3-a0ad-42c8-9923-5b17ba77a7a6\") " pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.507682 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52xbc\" (UniqueName: \"kubernetes.io/projected/347553c3-a0ad-42c8-9923-5b17ba77a7a6-kube-api-access-52xbc\") pod \"dnsmasq-dns-86db49b7ff-szxq8\" (UID: \"347553c3-a0ad-42c8-9923-5b17ba77a7a6\") " pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.576026 5024 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.674996 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-n2jhs"] Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.884541 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-cm2bw"] Oct 07 12:44:35 crc kubenswrapper[5024]: I1007 12:44:35.954286 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-szxq8"] Oct 07 12:44:36 crc kubenswrapper[5024]: I1007 12:44:36.081193 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc8479f9-krs8x" podUID="b73b80f9-e3fe-4903-87e1-4ad25a161520" containerName="dnsmasq-dns" containerID="cri-o://b14e095c3874b47a904bb6ec405edd43d012f77f532705e1ad1a3329b7e37a6c" gracePeriod=10 Oct 07 12:44:36 crc kubenswrapper[5024]: I1007 12:44:36.770102 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ab71f66-cf38-499b-b384-5519211260d0" path="/var/lib/kubelet/pods/0ab71f66-cf38-499b-b384-5519211260d0/volumes" Oct 07 12:44:36 crc kubenswrapper[5024]: I1007 12:44:36.770649 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5498c13e-f180-472a-8fc7-e1d4ea17b853" path="/var/lib/kubelet/pods/5498c13e-f180-472a-8fc7-e1d4ea17b853/volumes" Oct 07 12:44:37 crc kubenswrapper[5024]: I1007 12:44:37.090107 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-n2jhs" event={"ID":"903dfb65-4299-4443-a62d-216c7e5a2953","Type":"ContainerStarted","Data":"fcc3832a44dfb03ca22a1f8d251b7a7692db9ff2bee8a1c8ab4a74b984509522"} Oct 07 12:44:37 crc kubenswrapper[5024]: I1007 12:44:37.092807 5024 generic.go:334] "Generic (PLEG): container finished" podID="b73b80f9-e3fe-4903-87e1-4ad25a161520" containerID="b14e095c3874b47a904bb6ec405edd43d012f77f532705e1ad1a3329b7e37a6c" exitCode=0 
Oct 07 12:44:37 crc kubenswrapper[5024]: I1007 12:44:37.092886 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-krs8x" event={"ID":"b73b80f9-e3fe-4903-87e1-4ad25a161520","Type":"ContainerDied","Data":"b14e095c3874b47a904bb6ec405edd43d012f77f532705e1ad1a3329b7e37a6c"} Oct 07 12:44:37 crc kubenswrapper[5024]: I1007 12:44:37.094195 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" event={"ID":"347553c3-a0ad-42c8-9923-5b17ba77a7a6","Type":"ContainerStarted","Data":"01f3c6e0c86d1abc957a3533d282882a55469b38177e629d80cb766e1f79f9b7"} Oct 07 12:44:37 crc kubenswrapper[5024]: I1007 12:44:37.095114 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-cm2bw" event={"ID":"410dd573-7d12-4164-a27b-9e016a8a4fbf","Type":"ContainerStarted","Data":"db96a71019f69a5eaefceb38dc8815bf3281cb69b13a6dee00967dae7f438088"} Oct 07 12:44:37 crc kubenswrapper[5024]: I1007 12:44:37.095235 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-5lzpd" podUID="dee386d3-259e-4044-93d6-7090236db6d7" containerName="dnsmasq-dns" containerID="cri-o://059b45016c5461ece6fb757633a1fe85baa6833df1832154b5da070db99eb94d" gracePeriod=10 Oct 07 12:44:38 crc kubenswrapper[5024]: I1007 12:44:38.103926 5024 generic.go:334] "Generic (PLEG): container finished" podID="dee386d3-259e-4044-93d6-7090236db6d7" containerID="059b45016c5461ece6fb757633a1fe85baa6833df1832154b5da070db99eb94d" exitCode=0 Oct 07 12:44:38 crc kubenswrapper[5024]: I1007 12:44:38.103976 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5lzpd" event={"ID":"dee386d3-259e-4044-93d6-7090236db6d7","Type":"ContainerDied","Data":"059b45016c5461ece6fb757633a1fe85baa6833df1832154b5da070db99eb94d"} Oct 07 12:44:39 crc kubenswrapper[5024]: I1007 12:44:39.627339 5024 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-57d769cc4f-5lzpd" podUID="dee386d3-259e-4044-93d6-7090236db6d7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.97:5353: connect: connection refused" Oct 07 12:44:41 crc kubenswrapper[5024]: I1007 12:44:41.414913 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-krs8x" Oct 07 12:44:41 crc kubenswrapper[5024]: I1007 12:44:41.538165 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b73b80f9-e3fe-4903-87e1-4ad25a161520-dns-svc\") pod \"b73b80f9-e3fe-4903-87e1-4ad25a161520\" (UID: \"b73b80f9-e3fe-4903-87e1-4ad25a161520\") " Oct 07 12:44:41 crc kubenswrapper[5024]: I1007 12:44:41.538242 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b73b80f9-e3fe-4903-87e1-4ad25a161520-config\") pod \"b73b80f9-e3fe-4903-87e1-4ad25a161520\" (UID: \"b73b80f9-e3fe-4903-87e1-4ad25a161520\") " Oct 07 12:44:41 crc kubenswrapper[5024]: I1007 12:44:41.538425 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlr9v\" (UniqueName: \"kubernetes.io/projected/b73b80f9-e3fe-4903-87e1-4ad25a161520-kube-api-access-dlr9v\") pod \"b73b80f9-e3fe-4903-87e1-4ad25a161520\" (UID: \"b73b80f9-e3fe-4903-87e1-4ad25a161520\") " Oct 07 12:44:41 crc kubenswrapper[5024]: I1007 12:44:41.545986 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b73b80f9-e3fe-4903-87e1-4ad25a161520-kube-api-access-dlr9v" (OuterVolumeSpecName: "kube-api-access-dlr9v") pod "b73b80f9-e3fe-4903-87e1-4ad25a161520" (UID: "b73b80f9-e3fe-4903-87e1-4ad25a161520"). InnerVolumeSpecName "kube-api-access-dlr9v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:44:41 crc kubenswrapper[5024]: I1007 12:44:41.585430 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b73b80f9-e3fe-4903-87e1-4ad25a161520-config" (OuterVolumeSpecName: "config") pod "b73b80f9-e3fe-4903-87e1-4ad25a161520" (UID: "b73b80f9-e3fe-4903-87e1-4ad25a161520"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:44:41 crc kubenswrapper[5024]: I1007 12:44:41.585728 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b73b80f9-e3fe-4903-87e1-4ad25a161520-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b73b80f9-e3fe-4903-87e1-4ad25a161520" (UID: "b73b80f9-e3fe-4903-87e1-4ad25a161520"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:44:41 crc kubenswrapper[5024]: I1007 12:44:41.640566 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlr9v\" (UniqueName: \"kubernetes.io/projected/b73b80f9-e3fe-4903-87e1-4ad25a161520-kube-api-access-dlr9v\") on node \"crc\" DevicePath \"\"" Oct 07 12:44:41 crc kubenswrapper[5024]: I1007 12:44:41.640597 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b73b80f9-e3fe-4903-87e1-4ad25a161520-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 07 12:44:41 crc kubenswrapper[5024]: I1007 12:44:41.640606 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b73b80f9-e3fe-4903-87e1-4ad25a161520-config\") on node \"crc\" DevicePath \"\"" Oct 07 12:44:41 crc kubenswrapper[5024]: I1007 12:44:41.929412 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5lzpd" Oct 07 12:44:42 crc kubenswrapper[5024]: I1007 12:44:42.049586 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dee386d3-259e-4044-93d6-7090236db6d7-config\") pod \"dee386d3-259e-4044-93d6-7090236db6d7\" (UID: \"dee386d3-259e-4044-93d6-7090236db6d7\") " Oct 07 12:44:42 crc kubenswrapper[5024]: I1007 12:44:42.049644 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nng6\" (UniqueName: \"kubernetes.io/projected/dee386d3-259e-4044-93d6-7090236db6d7-kube-api-access-9nng6\") pod \"dee386d3-259e-4044-93d6-7090236db6d7\" (UID: \"dee386d3-259e-4044-93d6-7090236db6d7\") " Oct 07 12:44:42 crc kubenswrapper[5024]: I1007 12:44:42.049736 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dee386d3-259e-4044-93d6-7090236db6d7-dns-svc\") pod \"dee386d3-259e-4044-93d6-7090236db6d7\" (UID: \"dee386d3-259e-4044-93d6-7090236db6d7\") " Oct 07 12:44:42 crc kubenswrapper[5024]: I1007 12:44:42.052693 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dee386d3-259e-4044-93d6-7090236db6d7-kube-api-access-9nng6" (OuterVolumeSpecName: "kube-api-access-9nng6") pod "dee386d3-259e-4044-93d6-7090236db6d7" (UID: "dee386d3-259e-4044-93d6-7090236db6d7"). InnerVolumeSpecName "kube-api-access-9nng6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:44:42 crc kubenswrapper[5024]: I1007 12:44:42.086830 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dee386d3-259e-4044-93d6-7090236db6d7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dee386d3-259e-4044-93d6-7090236db6d7" (UID: "dee386d3-259e-4044-93d6-7090236db6d7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:44:42 crc kubenswrapper[5024]: I1007 12:44:42.094003 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dee386d3-259e-4044-93d6-7090236db6d7-config" (OuterVolumeSpecName: "config") pod "dee386d3-259e-4044-93d6-7090236db6d7" (UID: "dee386d3-259e-4044-93d6-7090236db6d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:44:42 crc kubenswrapper[5024]: I1007 12:44:42.136337 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-krs8x" event={"ID":"b73b80f9-e3fe-4903-87e1-4ad25a161520","Type":"ContainerDied","Data":"86fec0457a46c1af0ea1dc53d98f6d7e9b706dcf54ff56c6a0bc3571eb624a29"} Oct 07 12:44:42 crc kubenswrapper[5024]: I1007 12:44:42.136708 5024 scope.go:117] "RemoveContainer" containerID="b14e095c3874b47a904bb6ec405edd43d012f77f532705e1ad1a3329b7e37a6c" Oct 07 12:44:42 crc kubenswrapper[5024]: I1007 12:44:42.136446 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-krs8x" Oct 07 12:44:42 crc kubenswrapper[5024]: I1007 12:44:42.141114 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5lzpd" event={"ID":"dee386d3-259e-4044-93d6-7090236db6d7","Type":"ContainerDied","Data":"c78a5b3526263df6767ab9de312a60d1a30e20a6db3124cac4f18f63f975b067"} Oct 07 12:44:42 crc kubenswrapper[5024]: I1007 12:44:42.141214 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5lzpd" Oct 07 12:44:42 crc kubenswrapper[5024]: I1007 12:44:42.151212 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dee386d3-259e-4044-93d6-7090236db6d7-config\") on node \"crc\" DevicePath \"\"" Oct 07 12:44:42 crc kubenswrapper[5024]: I1007 12:44:42.151240 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nng6\" (UniqueName: \"kubernetes.io/projected/dee386d3-259e-4044-93d6-7090236db6d7-kube-api-access-9nng6\") on node \"crc\" DevicePath \"\"" Oct 07 12:44:42 crc kubenswrapper[5024]: I1007 12:44:42.151250 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dee386d3-259e-4044-93d6-7090236db6d7-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 07 12:44:42 crc kubenswrapper[5024]: I1007 12:44:42.185219 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5lzpd"] Oct 07 12:44:42 crc kubenswrapper[5024]: I1007 12:44:42.191334 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5lzpd"] Oct 07 12:44:42 crc kubenswrapper[5024]: I1007 12:44:42.198522 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-krs8x"] Oct 07 12:44:42 crc kubenswrapper[5024]: I1007 12:44:42.203127 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-krs8x"] Oct 07 12:44:42 crc kubenswrapper[5024]: I1007 12:44:42.776323 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b73b80f9-e3fe-4903-87e1-4ad25a161520" path="/var/lib/kubelet/pods/b73b80f9-e3fe-4903-87e1-4ad25a161520/volumes" Oct 07 12:44:42 crc kubenswrapper[5024]: I1007 12:44:42.777528 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dee386d3-259e-4044-93d6-7090236db6d7" 
path="/var/lib/kubelet/pods/dee386d3-259e-4044-93d6-7090236db6d7/volumes" Oct 07 12:44:43 crc kubenswrapper[5024]: I1007 12:44:43.149547 5024 generic.go:334] "Generic (PLEG): container finished" podID="347553c3-a0ad-42c8-9923-5b17ba77a7a6" containerID="9617661150e5c0f6db9c6baed91914dbe52391c5b1fa80c0da2538fe2c9e169f" exitCode=0 Oct 07 12:44:43 crc kubenswrapper[5024]: I1007 12:44:43.149667 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" event={"ID":"347553c3-a0ad-42c8-9923-5b17ba77a7a6","Type":"ContainerDied","Data":"9617661150e5c0f6db9c6baed91914dbe52391c5b1fa80c0da2538fe2c9e169f"} Oct 07 12:44:43 crc kubenswrapper[5024]: I1007 12:44:43.154008 5024 generic.go:334] "Generic (PLEG): container finished" podID="410dd573-7d12-4164-a27b-9e016a8a4fbf" containerID="b3ad6ad302bc4289851b02e93106b687f669dc53cf373bd6ce261bb54056224b" exitCode=0 Oct 07 12:44:43 crc kubenswrapper[5024]: I1007 12:44:43.154037 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-cm2bw" event={"ID":"410dd573-7d12-4164-a27b-9e016a8a4fbf","Type":"ContainerDied","Data":"b3ad6ad302bc4289851b02e93106b687f669dc53cf373bd6ce261bb54056224b"} Oct 07 12:44:43 crc kubenswrapper[5024]: I1007 12:44:43.556981 5024 scope.go:117] "RemoveContainer" containerID="49cacb58068990281955269af2886734d6e15c2d301ab6f1140a7fd281d97d60" Oct 07 12:44:43 crc kubenswrapper[5024]: I1007 12:44:43.720430 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 12:44:43 crc kubenswrapper[5024]: I1007 12:44:43.720487 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 12:44:43 crc kubenswrapper[5024]: I1007 12:44:43.720528 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 12:44:43 crc kubenswrapper[5024]: I1007 12:44:43.721071 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b0e095ff552b6ff8a1e3e80992a870a1ae734d6958dd93421411f9fc1d15e1a0"} pod="openshift-machine-config-operator/machine-config-daemon-t95cr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 07 12:44:43 crc kubenswrapper[5024]: I1007 12:44:43.721120 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" containerID="cri-o://b0e095ff552b6ff8a1e3e80992a870a1ae734d6958dd93421411f9fc1d15e1a0" gracePeriod=600 Oct 07 12:44:44 crc kubenswrapper[5024]: I1007 12:44:44.045072 5024 scope.go:117] "RemoveContainer" containerID="059b45016c5461ece6fb757633a1fe85baa6833df1832154b5da070db99eb94d" Oct 07 12:44:44 crc kubenswrapper[5024]: I1007 12:44:44.165259 5024 generic.go:334] "Generic (PLEG): container finished" podID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerID="b0e095ff552b6ff8a1e3e80992a870a1ae734d6958dd93421411f9fc1d15e1a0" exitCode=0 Oct 07 12:44:44 crc kubenswrapper[5024]: I1007 12:44:44.165302 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerDied","Data":"b0e095ff552b6ff8a1e3e80992a870a1ae734d6958dd93421411f9fc1d15e1a0"} Oct 07 12:44:44 crc 
kubenswrapper[5024]: I1007 12:44:44.294762 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5ccc8479f9-krs8x" podUID="b73b80f9-e3fe-4903-87e1-4ad25a161520" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.96:5353: i/o timeout" Oct 07 12:44:44 crc kubenswrapper[5024]: I1007 12:44:44.454960 5024 scope.go:117] "RemoveContainer" containerID="7db8841451472c297e97810fa679138431bc37822aedaaf816e292eb83e29dfc" Oct 07 12:44:44 crc kubenswrapper[5024]: I1007 12:44:44.553294 5024 scope.go:117] "RemoveContainer" containerID="ae467e69e08193e325a69f8bb005bb8c341ea340f7486140e99337d87e5c99d6" Oct 07 12:44:45 crc kubenswrapper[5024]: I1007 12:44:45.174500 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a72fb2e5-9365-4aea-854d-06997dde109c","Type":"ContainerStarted","Data":"6edc699a5cd8cf47a77e51d61361e2e5ef990b2a8c8f1fb22de76322366564dd"} Oct 07 12:44:45 crc kubenswrapper[5024]: I1007 12:44:45.176027 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"194f8a33-76cf-48a3-a4fc-0ff4eb701bb5","Type":"ContainerStarted","Data":"54b7237d1f357fa81c6db729670ea2554f8d6621e667f82341fff524f5d57307"} Oct 07 12:44:45 crc kubenswrapper[5024]: I1007 12:44:45.178232 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5b493394-e353-45b2-b7a9-71b94654e2e7","Type":"ContainerStarted","Data":"805dca33bf0ccc5f8522d5b42515598b2f3ccf64027bd47fe8f70c2a4de2d500"} Oct 07 12:44:45 crc kubenswrapper[5024]: I1007 12:44:45.180919 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerStarted","Data":"fd405720319248df31cb182cbf68d7e11b73aa6427c42acbbb531905f6746cbe"} Oct 07 12:44:45 crc kubenswrapper[5024]: I1007 12:44:45.182412 5024 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"7e4863f6-5bdf-407e-ab2c-a161223537cc","Type":"ContainerStarted","Data":"e2237c1ca8613b67ba753255828f81c79167784f8cf4012d5e2157fffb542948"} Oct 07 12:44:45 crc kubenswrapper[5024]: I1007 12:44:45.184738 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-n2jhs" event={"ID":"903dfb65-4299-4443-a62d-216c7e5a2953","Type":"ContainerStarted","Data":"15815caba430da557f261b4c7322cdfec44704d3aa69a63cc2bdae80f5a4047e"} Oct 07 12:44:45 crc kubenswrapper[5024]: I1007 12:44:45.186150 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-q2ktf" event={"ID":"befa2358-2851-4973-b1e2-6f003e9f1089","Type":"ContainerStarted","Data":"fa59c7cd11112b2fa67622b1ddf62da5f3262359a46677346c4134f513f66ee5"} Oct 07 12:44:45 crc kubenswrapper[5024]: I1007 12:44:45.187559 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f381de8f-0818-456c-9acb-7ee939a6da12","Type":"ContainerStarted","Data":"aedc2182176b1f3371afcd98a133f480ec283228f12e38c29466cbea78c1adc4"} Oct 07 12:44:45 crc kubenswrapper[5024]: I1007 12:44:45.187706 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 07 12:44:45 crc kubenswrapper[5024]: I1007 12:44:45.190505 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-cm2bw" event={"ID":"410dd573-7d12-4164-a27b-9e016a8a4fbf","Type":"ContainerStarted","Data":"67636132fd3f08251be2a1d9b1d83dd06e89138311066708b40f3928fa88f72b"} Oct 07 12:44:45 crc kubenswrapper[5024]: I1007 12:44:45.190595 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-cm2bw" Oct 07 12:44:45 crc kubenswrapper[5024]: I1007 12:44:45.192291 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"acb1b089-7f99-44d4-9e4d-4cb652ee6e21","Type":"ContainerStarted","Data":"b4f07ab0f4e9e91d097155f3d3fe30e9a3c61d227b51bef2d00822fff1a22460"} Oct 07 12:44:45 crc kubenswrapper[5024]: I1007 12:44:45.195532 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bs9lb" event={"ID":"91fa5e61-2577-4fad-9b32-395eb0e5105b","Type":"ContainerStarted","Data":"30e33bd0093d0a6d9238955a619e72291c3a1ec5368cac7370a6136aeee94c1c"} Oct 07 12:44:45 crc kubenswrapper[5024]: I1007 12:44:45.199670 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" event={"ID":"347553c3-a0ad-42c8-9923-5b17ba77a7a6","Type":"ContainerStarted","Data":"80ed6b1ba1f0ec6c4b86244276aeb558d15816b3bfc58b2774d8a5710719b4b3"} Oct 07 12:44:45 crc kubenswrapper[5024]: I1007 12:44:45.203653 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" Oct 07 12:44:45 crc kubenswrapper[5024]: I1007 12:44:45.243411 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-cm2bw" podStartSLOduration=11.243351758 podStartE2EDuration="11.243351758s" podCreationTimestamp="2025-10-07 12:44:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:44:45.239781633 +0000 UTC m=+1023.315568491" watchObservedRunningTime="2025-10-07 12:44:45.243351758 +0000 UTC m=+1023.319138596" Oct 07 12:44:45 crc kubenswrapper[5024]: I1007 12:44:45.279827 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" podStartSLOduration=10.279805657 podStartE2EDuration="10.279805657s" podCreationTimestamp="2025-10-07 12:44:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:44:45.269457914 +0000 UTC 
m=+1023.345244782" watchObservedRunningTime="2025-10-07 12:44:45.279805657 +0000 UTC m=+1023.355592495" Oct 07 12:44:45 crc kubenswrapper[5024]: I1007 12:44:45.289128 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=9.490988512 podStartE2EDuration="20.28911015s" podCreationTimestamp="2025-10-07 12:44:25 +0000 UTC" firstStartedPulling="2025-10-07 12:44:33.717248448 +0000 UTC m=+1011.793035286" lastFinishedPulling="2025-10-07 12:44:44.515370086 +0000 UTC m=+1022.591156924" observedRunningTime="2025-10-07 12:44:45.28434669 +0000 UTC m=+1023.360133528" watchObservedRunningTime="2025-10-07 12:44:45.28911015 +0000 UTC m=+1023.364896988" Oct 07 12:44:46 crc kubenswrapper[5024]: I1007 12:44:46.212511 5024 generic.go:334] "Generic (PLEG): container finished" podID="befa2358-2851-4973-b1e2-6f003e9f1089" containerID="fa59c7cd11112b2fa67622b1ddf62da5f3262359a46677346c4134f513f66ee5" exitCode=0 Oct 07 12:44:46 crc kubenswrapper[5024]: I1007 12:44:46.212649 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-q2ktf" event={"ID":"befa2358-2851-4973-b1e2-6f003e9f1089","Type":"ContainerDied","Data":"fa59c7cd11112b2fa67622b1ddf62da5f3262359a46677346c4134f513f66ee5"} Oct 07 12:44:46 crc kubenswrapper[5024]: I1007 12:44:46.216981 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d2d61d4d-4921-4832-bb53-3ca3a70663cf","Type":"ContainerStarted","Data":"6a80969d05de5dc169894d66537946fa6ea76bfcb60e67f286b3ce277588a810"} Oct 07 12:44:46 crc kubenswrapper[5024]: I1007 12:44:46.219407 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a72fb2e5-9365-4aea-854d-06997dde109c","Type":"ContainerStarted","Data":"e230f8efde4adf80a4fda3f6bd94569e83839a9e98a94313791dd136a025dd21"} Oct 07 12:44:46 crc kubenswrapper[5024]: I1007 12:44:46.224269 5024 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"194f8a33-76cf-48a3-a4fc-0ff4eb701bb5","Type":"ContainerStarted","Data":"005f4d4c20f8651d6611abe89abae523666d289a44a8108e377d1a9e40793aab"} Oct 07 12:44:46 crc kubenswrapper[5024]: I1007 12:44:46.228051 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f","Type":"ContainerStarted","Data":"69ff3a815cfcebe3b00f8e6f065090d3b8dd1b06444974727f4605fb76e990d5"} Oct 07 12:44:46 crc kubenswrapper[5024]: I1007 12:44:46.229297 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-bs9lb" Oct 07 12:44:46 crc kubenswrapper[5024]: I1007 12:44:46.293880 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=9.697519119 podStartE2EDuration="18.293858219s" podCreationTimestamp="2025-10-07 12:44:28 +0000 UTC" firstStartedPulling="2025-10-07 12:44:33.726653274 +0000 UTC m=+1011.802440112" lastFinishedPulling="2025-10-07 12:44:42.322992354 +0000 UTC m=+1020.398779212" observedRunningTime="2025-10-07 12:44:46.289162962 +0000 UTC m=+1024.364949810" watchObservedRunningTime="2025-10-07 12:44:46.293858219 +0000 UTC m=+1024.369645057" Oct 07 12:44:46 crc kubenswrapper[5024]: I1007 12:44:46.344794 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=14.855028519 podStartE2EDuration="23.344773563s" podCreationTimestamp="2025-10-07 12:44:23 +0000 UTC" firstStartedPulling="2025-10-07 12:44:33.441808469 +0000 UTC m=+1011.517595307" lastFinishedPulling="2025-10-07 12:44:41.931553513 +0000 UTC m=+1020.007340351" observedRunningTime="2025-10-07 12:44:46.335618184 +0000 UTC m=+1024.411405032" watchObservedRunningTime="2025-10-07 12:44:46.344773563 +0000 UTC m=+1024.420560421" Oct 07 12:44:46 crc kubenswrapper[5024]: I1007 12:44:46.406099 5024 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=6.929194944 podStartE2EDuration="14.406075031s" podCreationTimestamp="2025-10-07 12:44:32 +0000 UTC" firstStartedPulling="2025-10-07 12:44:34.842520352 +0000 UTC m=+1012.918307190" lastFinishedPulling="2025-10-07 12:44:42.319400399 +0000 UTC m=+1020.395187277" observedRunningTime="2025-10-07 12:44:46.385377884 +0000 UTC m=+1024.461164732" watchObservedRunningTime="2025-10-07 12:44:46.406075031 +0000 UTC m=+1024.481861879" Oct 07 12:44:46 crc kubenswrapper[5024]: I1007 12:44:46.407942 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-n2jhs" podStartSLOduration=5.068317415 podStartE2EDuration="12.407934465s" podCreationTimestamp="2025-10-07 12:44:34 +0000 UTC" firstStartedPulling="2025-10-07 12:44:37.011592571 +0000 UTC m=+1015.087379409" lastFinishedPulling="2025-10-07 12:44:44.351209591 +0000 UTC m=+1022.426996459" observedRunningTime="2025-10-07 12:44:46.403955669 +0000 UTC m=+1024.479742517" watchObservedRunningTime="2025-10-07 12:44:46.407934465 +0000 UTC m=+1024.483721313" Oct 07 12:44:46 crc kubenswrapper[5024]: I1007 12:44:46.464368 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-bs9lb" podStartSLOduration=8.742785676 podStartE2EDuration="17.46434766s" podCreationTimestamp="2025-10-07 12:44:29 +0000 UTC" firstStartedPulling="2025-10-07 12:44:33.737722388 +0000 UTC m=+1011.813509226" lastFinishedPulling="2025-10-07 12:44:42.459284372 +0000 UTC m=+1020.535071210" observedRunningTime="2025-10-07 12:44:46.463401252 +0000 UTC m=+1024.539188130" watchObservedRunningTime="2025-10-07 12:44:46.46434766 +0000 UTC m=+1024.540134498" Oct 07 12:44:47 crc kubenswrapper[5024]: I1007 12:44:47.235413 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-q2ktf" 
event={"ID":"befa2358-2851-4973-b1e2-6f003e9f1089","Type":"ContainerStarted","Data":"20dda4c2634d92a389a062dc8647d052853865a4cb6afd863618411c402da224"} Oct 07 12:44:47 crc kubenswrapper[5024]: I1007 12:44:47.235788 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-q2ktf" event={"ID":"befa2358-2851-4973-b1e2-6f003e9f1089","Type":"ContainerStarted","Data":"3bd11cca0bb0e62ffe0a7c27c45306b9fe4b41970dc58a0095360b782c8b28c8"} Oct 07 12:44:47 crc kubenswrapper[5024]: I1007 12:44:47.257718 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-q2ktf" podStartSLOduration=10.104391563 podStartE2EDuration="18.257698039s" podCreationTimestamp="2025-10-07 12:44:29 +0000 UTC" firstStartedPulling="2025-10-07 12:44:33.768921904 +0000 UTC m=+1011.844708742" lastFinishedPulling="2025-10-07 12:44:41.92222838 +0000 UTC m=+1019.998015218" observedRunningTime="2025-10-07 12:44:47.252385973 +0000 UTC m=+1025.328172811" watchObservedRunningTime="2025-10-07 12:44:47.257698039 +0000 UTC m=+1025.333484877" Oct 07 12:44:48 crc kubenswrapper[5024]: I1007 12:44:48.145783 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:48 crc kubenswrapper[5024]: I1007 12:44:48.214001 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:48 crc kubenswrapper[5024]: I1007 12:44:48.244346 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-q2ktf" Oct 07 12:44:48 crc kubenswrapper[5024]: I1007 12:44:48.244398 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-q2ktf" Oct 07 12:44:48 crc kubenswrapper[5024]: I1007 12:44:48.244429 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 
12:44:49.027742 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 12:44:49.028000 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 12:44:49.074580 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 12:44:49.252854 5024 generic.go:334] "Generic (PLEG): container finished" podID="5b493394-e353-45b2-b7a9-71b94654e2e7" containerID="805dca33bf0ccc5f8522d5b42515598b2f3ccf64027bd47fe8f70c2a4de2d500" exitCode=0 Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 12:44:49.252942 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5b493394-e353-45b2-b7a9-71b94654e2e7","Type":"ContainerDied","Data":"805dca33bf0ccc5f8522d5b42515598b2f3ccf64027bd47fe8f70c2a4de2d500"} Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 12:44:49.255946 5024 generic.go:334] "Generic (PLEG): container finished" podID="7e4863f6-5bdf-407e-ab2c-a161223537cc" containerID="e2237c1ca8613b67ba753255828f81c79167784f8cf4012d5e2157fffb542948" exitCode=0 Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 12:44:49.255984 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"7e4863f6-5bdf-407e-ab2c-a161223537cc","Type":"ContainerDied","Data":"e2237c1ca8613b67ba753255828f81c79167784f8cf4012d5e2157fffb542948"} Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 12:44:49.316296 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 12:44:49.319423 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 12:44:49.326671 5024 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 12:44:49.887997 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Oct 07 12:44:49 crc kubenswrapper[5024]: E1007 12:44:49.888735 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dee386d3-259e-4044-93d6-7090236db6d7" containerName="dnsmasq-dns" Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 12:44:49.888757 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="dee386d3-259e-4044-93d6-7090236db6d7" containerName="dnsmasq-dns" Oct 07 12:44:49 crc kubenswrapper[5024]: E1007 12:44:49.888773 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dee386d3-259e-4044-93d6-7090236db6d7" containerName="init" Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 12:44:49.888781 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="dee386d3-259e-4044-93d6-7090236db6d7" containerName="init" Oct 07 12:44:49 crc kubenswrapper[5024]: E1007 12:44:49.888796 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b73b80f9-e3fe-4903-87e1-4ad25a161520" containerName="init" Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 12:44:49.888804 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b73b80f9-e3fe-4903-87e1-4ad25a161520" containerName="init" Oct 07 12:44:49 crc kubenswrapper[5024]: E1007 12:44:49.888831 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b73b80f9-e3fe-4903-87e1-4ad25a161520" containerName="dnsmasq-dns" Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 12:44:49.888839 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b73b80f9-e3fe-4903-87e1-4ad25a161520" containerName="dnsmasq-dns" Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 12:44:49.889042 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="dee386d3-259e-4044-93d6-7090236db6d7" containerName="dnsmasq-dns" Oct 07 12:44:49 crc kubenswrapper[5024]: 
I1007 12:44:49.889069 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="b73b80f9-e3fe-4903-87e1-4ad25a161520" containerName="dnsmasq-dns" Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 12:44:49.891922 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 12:44:49.898358 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-5tn8c" Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 12:44:49.898517 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 12:44:49.898630 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 12:44:49.898764 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 12:44:49.904012 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 12:44:49.989849 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/edcae764-2b6e-4119-b44c-64ddab7e5309-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"edcae764-2b6e-4119-b44c-64ddab7e5309\") " pod="openstack/ovn-northd-0" Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 12:44:49.989924 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/edcae764-2b6e-4119-b44c-64ddab7e5309-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"edcae764-2b6e-4119-b44c-64ddab7e5309\") " pod="openstack/ovn-northd-0" Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 12:44:49.989989 5024 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edcae764-2b6e-4119-b44c-64ddab7e5309-config\") pod \"ovn-northd-0\" (UID: \"edcae764-2b6e-4119-b44c-64ddab7e5309\") " pod="openstack/ovn-northd-0" Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 12:44:49.990033 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/edcae764-2b6e-4119-b44c-64ddab7e5309-scripts\") pod \"ovn-northd-0\" (UID: \"edcae764-2b6e-4119-b44c-64ddab7e5309\") " pod="openstack/ovn-northd-0" Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 12:44:49.990061 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8p47\" (UniqueName: \"kubernetes.io/projected/edcae764-2b6e-4119-b44c-64ddab7e5309-kube-api-access-q8p47\") pod \"ovn-northd-0\" (UID: \"edcae764-2b6e-4119-b44c-64ddab7e5309\") " pod="openstack/ovn-northd-0" Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 12:44:49.990099 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edcae764-2b6e-4119-b44c-64ddab7e5309-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"edcae764-2b6e-4119-b44c-64ddab7e5309\") " pod="openstack/ovn-northd-0" Oct 07 12:44:49 crc kubenswrapper[5024]: I1007 12:44:49.990208 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/edcae764-2b6e-4119-b44c-64ddab7e5309-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"edcae764-2b6e-4119-b44c-64ddab7e5309\") " pod="openstack/ovn-northd-0" Oct 07 12:44:50 crc kubenswrapper[5024]: I1007 12:44:50.091381 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/edcae764-2b6e-4119-b44c-64ddab7e5309-config\") pod \"ovn-northd-0\" (UID: \"edcae764-2b6e-4119-b44c-64ddab7e5309\") " pod="openstack/ovn-northd-0" Oct 07 12:44:50 crc kubenswrapper[5024]: I1007 12:44:50.091446 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/edcae764-2b6e-4119-b44c-64ddab7e5309-scripts\") pod \"ovn-northd-0\" (UID: \"edcae764-2b6e-4119-b44c-64ddab7e5309\") " pod="openstack/ovn-northd-0" Oct 07 12:44:50 crc kubenswrapper[5024]: I1007 12:44:50.091471 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8p47\" (UniqueName: \"kubernetes.io/projected/edcae764-2b6e-4119-b44c-64ddab7e5309-kube-api-access-q8p47\") pod \"ovn-northd-0\" (UID: \"edcae764-2b6e-4119-b44c-64ddab7e5309\") " pod="openstack/ovn-northd-0" Oct 07 12:44:50 crc kubenswrapper[5024]: I1007 12:44:50.091488 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edcae764-2b6e-4119-b44c-64ddab7e5309-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"edcae764-2b6e-4119-b44c-64ddab7e5309\") " pod="openstack/ovn-northd-0" Oct 07 12:44:50 crc kubenswrapper[5024]: I1007 12:44:50.091541 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/edcae764-2b6e-4119-b44c-64ddab7e5309-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"edcae764-2b6e-4119-b44c-64ddab7e5309\") " pod="openstack/ovn-northd-0" Oct 07 12:44:50 crc kubenswrapper[5024]: I1007 12:44:50.091572 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/edcae764-2b6e-4119-b44c-64ddab7e5309-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"edcae764-2b6e-4119-b44c-64ddab7e5309\") " pod="openstack/ovn-northd-0" Oct 07 12:44:50 crc 
kubenswrapper[5024]: I1007 12:44:50.091844 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/edcae764-2b6e-4119-b44c-64ddab7e5309-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"edcae764-2b6e-4119-b44c-64ddab7e5309\") " pod="openstack/ovn-northd-0" Oct 07 12:44:50 crc kubenswrapper[5024]: I1007 12:44:50.092575 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/edcae764-2b6e-4119-b44c-64ddab7e5309-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"edcae764-2b6e-4119-b44c-64ddab7e5309\") " pod="openstack/ovn-northd-0" Oct 07 12:44:50 crc kubenswrapper[5024]: I1007 12:44:50.093003 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edcae764-2b6e-4119-b44c-64ddab7e5309-config\") pod \"ovn-northd-0\" (UID: \"edcae764-2b6e-4119-b44c-64ddab7e5309\") " pod="openstack/ovn-northd-0" Oct 07 12:44:50 crc kubenswrapper[5024]: I1007 12:44:50.093918 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/edcae764-2b6e-4119-b44c-64ddab7e5309-scripts\") pod \"ovn-northd-0\" (UID: \"edcae764-2b6e-4119-b44c-64ddab7e5309\") " pod="openstack/ovn-northd-0" Oct 07 12:44:50 crc kubenswrapper[5024]: I1007 12:44:50.098012 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edcae764-2b6e-4119-b44c-64ddab7e5309-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"edcae764-2b6e-4119-b44c-64ddab7e5309\") " pod="openstack/ovn-northd-0" Oct 07 12:44:50 crc kubenswrapper[5024]: I1007 12:44:50.098379 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/edcae764-2b6e-4119-b44c-64ddab7e5309-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"edcae764-2b6e-4119-b44c-64ddab7e5309\") " pod="openstack/ovn-northd-0" Oct 07 12:44:50 crc kubenswrapper[5024]: I1007 12:44:50.110496 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/edcae764-2b6e-4119-b44c-64ddab7e5309-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"edcae764-2b6e-4119-b44c-64ddab7e5309\") " pod="openstack/ovn-northd-0" Oct 07 12:44:50 crc kubenswrapper[5024]: I1007 12:44:50.114396 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8p47\" (UniqueName: \"kubernetes.io/projected/edcae764-2b6e-4119-b44c-64ddab7e5309-kube-api-access-q8p47\") pod \"ovn-northd-0\" (UID: \"edcae764-2b6e-4119-b44c-64ddab7e5309\") " pod="openstack/ovn-northd-0" Oct 07 12:44:50 crc kubenswrapper[5024]: I1007 12:44:50.218723 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Oct 07 12:44:50 crc kubenswrapper[5024]: I1007 12:44:50.267231 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"7e4863f6-5bdf-407e-ab2c-a161223537cc","Type":"ContainerStarted","Data":"fd5a6dc5d83b821a0f9e462476637ce07f799de7f1af84777151801e805c0ccb"} Oct 07 12:44:50 crc kubenswrapper[5024]: I1007 12:44:50.270715 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5b493394-e353-45b2-b7a9-71b94654e2e7","Type":"ContainerStarted","Data":"57d04e1b6f3ac2508b292cab23c9ec5802668a0244008cb0a504e3db956a6656"} Oct 07 12:44:50 crc kubenswrapper[5024]: I1007 12:44:50.293866 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=19.974642437 podStartE2EDuration="29.293849369s" podCreationTimestamp="2025-10-07 12:44:21 +0000 UTC" firstStartedPulling="2025-10-07 12:44:32.87002649 +0000 UTC m=+1010.945813328" lastFinishedPulling="2025-10-07 
12:44:42.189233422 +0000 UTC m=+1020.265020260" observedRunningTime="2025-10-07 12:44:50.285202775 +0000 UTC m=+1028.360989613" watchObservedRunningTime="2025-10-07 12:44:50.293849369 +0000 UTC m=+1028.369636197" Oct 07 12:44:50 crc kubenswrapper[5024]: I1007 12:44:50.319429 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=19.248342245 podStartE2EDuration="28.319407279s" podCreationTimestamp="2025-10-07 12:44:22 +0000 UTC" firstStartedPulling="2025-10-07 12:44:33.388610569 +0000 UTC m=+1011.464397407" lastFinishedPulling="2025-10-07 12:44:42.459675603 +0000 UTC m=+1020.535462441" observedRunningTime="2025-10-07 12:44:50.309634812 +0000 UTC m=+1028.385421650" watchObservedRunningTime="2025-10-07 12:44:50.319407279 +0000 UTC m=+1028.395194117" Oct 07 12:44:50 crc kubenswrapper[5024]: I1007 12:44:50.402429 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-cm2bw" Oct 07 12:44:50 crc kubenswrapper[5024]: I1007 12:44:50.578287 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" Oct 07 12:44:50 crc kubenswrapper[5024]: I1007 12:44:50.627545 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-cm2bw"] Oct 07 12:44:50 crc kubenswrapper[5024]: I1007 12:44:50.708580 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 07 12:44:51 crc kubenswrapper[5024]: I1007 12:44:51.280570 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"edcae764-2b6e-4119-b44c-64ddab7e5309","Type":"ContainerStarted","Data":"601337c8add9cd234e5d3ae2b2b2bf2d528523be127d73bc17210918dc292a10"} Oct 07 12:44:51 crc kubenswrapper[5024]: I1007 12:44:51.280830 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-cm2bw" 
podUID="410dd573-7d12-4164-a27b-9e016a8a4fbf" containerName="dnsmasq-dns" containerID="cri-o://67636132fd3f08251be2a1d9b1d83dd06e89138311066708b40f3928fa88f72b" gracePeriod=10 Oct 07 12:44:51 crc kubenswrapper[5024]: I1007 12:44:51.983392 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-cm2bw" Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.124740 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/410dd573-7d12-4164-a27b-9e016a8a4fbf-dns-svc\") pod \"410dd573-7d12-4164-a27b-9e016a8a4fbf\" (UID: \"410dd573-7d12-4164-a27b-9e016a8a4fbf\") " Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.124784 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnh98\" (UniqueName: \"kubernetes.io/projected/410dd573-7d12-4164-a27b-9e016a8a4fbf-kube-api-access-xnh98\") pod \"410dd573-7d12-4164-a27b-9e016a8a4fbf\" (UID: \"410dd573-7d12-4164-a27b-9e016a8a4fbf\") " Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.124895 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/410dd573-7d12-4164-a27b-9e016a8a4fbf-config\") pod \"410dd573-7d12-4164-a27b-9e016a8a4fbf\" (UID: \"410dd573-7d12-4164-a27b-9e016a8a4fbf\") " Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.125003 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/410dd573-7d12-4164-a27b-9e016a8a4fbf-ovsdbserver-nb\") pod \"410dd573-7d12-4164-a27b-9e016a8a4fbf\" (UID: \"410dd573-7d12-4164-a27b-9e016a8a4fbf\") " Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.129921 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/410dd573-7d12-4164-a27b-9e016a8a4fbf-kube-api-access-xnh98" 
(OuterVolumeSpecName: "kube-api-access-xnh98") pod "410dd573-7d12-4164-a27b-9e016a8a4fbf" (UID: "410dd573-7d12-4164-a27b-9e016a8a4fbf"). InnerVolumeSpecName "kube-api-access-xnh98". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.168817 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/410dd573-7d12-4164-a27b-9e016a8a4fbf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "410dd573-7d12-4164-a27b-9e016a8a4fbf" (UID: "410dd573-7d12-4164-a27b-9e016a8a4fbf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.181852 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/410dd573-7d12-4164-a27b-9e016a8a4fbf-config" (OuterVolumeSpecName: "config") pod "410dd573-7d12-4164-a27b-9e016a8a4fbf" (UID: "410dd573-7d12-4164-a27b-9e016a8a4fbf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.182657 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/410dd573-7d12-4164-a27b-9e016a8a4fbf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "410dd573-7d12-4164-a27b-9e016a8a4fbf" (UID: "410dd573-7d12-4164-a27b-9e016a8a4fbf"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.226972 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/410dd573-7d12-4164-a27b-9e016a8a4fbf-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.227023 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnh98\" (UniqueName: \"kubernetes.io/projected/410dd573-7d12-4164-a27b-9e016a8a4fbf-kube-api-access-xnh98\") on node \"crc\" DevicePath \"\"" Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.227041 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/410dd573-7d12-4164-a27b-9e016a8a4fbf-config\") on node \"crc\" DevicePath \"\"" Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.227053 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/410dd573-7d12-4164-a27b-9e016a8a4fbf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.290896 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"edcae764-2b6e-4119-b44c-64ddab7e5309","Type":"ContainerStarted","Data":"313b7f07f5f2402ae980bca16309c4550e171217695fac35d632187a0d7cc0f9"} Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.290961 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"edcae764-2b6e-4119-b44c-64ddab7e5309","Type":"ContainerStarted","Data":"ac2876a5c740fd623159317676df71f36c274dffcd36804fd9599c5027835514"} Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.291094 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.293030 5024 generic.go:334] "Generic (PLEG): container finished" 
podID="410dd573-7d12-4164-a27b-9e016a8a4fbf" containerID="67636132fd3f08251be2a1d9b1d83dd06e89138311066708b40f3928fa88f72b" exitCode=0 Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.293097 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-cm2bw" event={"ID":"410dd573-7d12-4164-a27b-9e016a8a4fbf","Type":"ContainerDied","Data":"67636132fd3f08251be2a1d9b1d83dd06e89138311066708b40f3928fa88f72b"} Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.293153 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-cm2bw" Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.293176 5024 scope.go:117] "RemoveContainer" containerID="67636132fd3f08251be2a1d9b1d83dd06e89138311066708b40f3928fa88f72b" Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.293161 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-cm2bw" event={"ID":"410dd573-7d12-4164-a27b-9e016a8a4fbf","Type":"ContainerDied","Data":"db96a71019f69a5eaefceb38dc8815bf3281cb69b13a6dee00967dae7f438088"} Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.314321 5024 scope.go:117] "RemoveContainer" containerID="b3ad6ad302bc4289851b02e93106b687f669dc53cf373bd6ce261bb54056224b" Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.338921 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.353437537 podStartE2EDuration="3.338776386s" podCreationTimestamp="2025-10-07 12:44:49 +0000 UTC" firstStartedPulling="2025-10-07 12:44:50.710322214 +0000 UTC m=+1028.786109052" lastFinishedPulling="2025-10-07 12:44:51.695661063 +0000 UTC m=+1029.771447901" observedRunningTime="2025-10-07 12:44:52.316468302 +0000 UTC m=+1030.392255140" watchObservedRunningTime="2025-10-07 12:44:52.338776386 +0000 UTC m=+1030.414563224" Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.359714 5024 scope.go:117] 
"RemoveContainer" containerID="67636132fd3f08251be2a1d9b1d83dd06e89138311066708b40f3928fa88f72b" Oct 07 12:44:52 crc kubenswrapper[5024]: E1007 12:44:52.360361 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67636132fd3f08251be2a1d9b1d83dd06e89138311066708b40f3928fa88f72b\": container with ID starting with 67636132fd3f08251be2a1d9b1d83dd06e89138311066708b40f3928fa88f72b not found: ID does not exist" containerID="67636132fd3f08251be2a1d9b1d83dd06e89138311066708b40f3928fa88f72b" Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.360446 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67636132fd3f08251be2a1d9b1d83dd06e89138311066708b40f3928fa88f72b"} err="failed to get container status \"67636132fd3f08251be2a1d9b1d83dd06e89138311066708b40f3928fa88f72b\": rpc error: code = NotFound desc = could not find container \"67636132fd3f08251be2a1d9b1d83dd06e89138311066708b40f3928fa88f72b\": container with ID starting with 67636132fd3f08251be2a1d9b1d83dd06e89138311066708b40f3928fa88f72b not found: ID does not exist" Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.360474 5024 scope.go:117] "RemoveContainer" containerID="b3ad6ad302bc4289851b02e93106b687f669dc53cf373bd6ce261bb54056224b" Oct 07 12:44:52 crc kubenswrapper[5024]: E1007 12:44:52.360786 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3ad6ad302bc4289851b02e93106b687f669dc53cf373bd6ce261bb54056224b\": container with ID starting with b3ad6ad302bc4289851b02e93106b687f669dc53cf373bd6ce261bb54056224b not found: ID does not exist" containerID="b3ad6ad302bc4289851b02e93106b687f669dc53cf373bd6ce261bb54056224b" Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.360822 5024 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b3ad6ad302bc4289851b02e93106b687f669dc53cf373bd6ce261bb54056224b"} err="failed to get container status \"b3ad6ad302bc4289851b02e93106b687f669dc53cf373bd6ce261bb54056224b\": rpc error: code = NotFound desc = could not find container \"b3ad6ad302bc4289851b02e93106b687f669dc53cf373bd6ce261bb54056224b\": container with ID starting with b3ad6ad302bc4289851b02e93106b687f669dc53cf373bd6ce261bb54056224b not found: ID does not exist" Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.366424 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-cm2bw"] Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.373060 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-cm2bw"] Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.723169 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.723226 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Oct 07 12:44:52 crc kubenswrapper[5024]: I1007 12:44:52.760835 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="410dd573-7d12-4164-a27b-9e016a8a4fbf" path="/var/lib/kubelet/pods/410dd573-7d12-4164-a27b-9e016a8a4fbf/volumes" Oct 07 12:44:53 crc kubenswrapper[5024]: I1007 12:44:53.951512 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:53 crc kubenswrapper[5024]: I1007 12:44:53.951894 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Oct 07 12:44:54 crc kubenswrapper[5024]: I1007 12:44:54.319370 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Oct 07 12:44:55 crc kubenswrapper[5024]: I1007 12:44:55.959200 5024 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 07 12:44:59 crc kubenswrapper[5024]: I1007 12:44:59.269718 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Oct 07 12:44:59 crc kubenswrapper[5024]: I1007 12:44:59.333024 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="7e4863f6-5bdf-407e-ab2c-a161223537cc" containerName="galera" probeResult="failure" output=< Oct 07 12:44:59 crc kubenswrapper[5024]: wsrep_local_state_comment (Joined) differs from Synced Oct 07 12:44:59 crc kubenswrapper[5024]: > Oct 07 12:45:00 crc kubenswrapper[5024]: I1007 12:45:00.160180 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29330685-rh22x"] Oct 07 12:45:00 crc kubenswrapper[5024]: E1007 12:45:00.167867 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410dd573-7d12-4164-a27b-9e016a8a4fbf" containerName="dnsmasq-dns" Oct 07 12:45:00 crc kubenswrapper[5024]: I1007 12:45:00.168158 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="410dd573-7d12-4164-a27b-9e016a8a4fbf" containerName="dnsmasq-dns" Oct 07 12:45:00 crc kubenswrapper[5024]: E1007 12:45:00.168262 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410dd573-7d12-4164-a27b-9e016a8a4fbf" containerName="init" Oct 07 12:45:00 crc kubenswrapper[5024]: I1007 12:45:00.168340 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="410dd573-7d12-4164-a27b-9e016a8a4fbf" containerName="init" Oct 07 12:45:00 crc kubenswrapper[5024]: I1007 12:45:00.168632 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="410dd573-7d12-4164-a27b-9e016a8a4fbf" containerName="dnsmasq-dns" Oct 07 12:45:00 crc kubenswrapper[5024]: I1007 12:45:00.169489 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29330685-rh22x" Oct 07 12:45:00 crc kubenswrapper[5024]: I1007 12:45:00.172452 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 07 12:45:00 crc kubenswrapper[5024]: I1007 12:45:00.175965 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 07 12:45:00 crc kubenswrapper[5024]: I1007 12:45:00.178274 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29330685-rh22x"] Oct 07 12:45:00 crc kubenswrapper[5024]: I1007 12:45:00.295862 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pw7p\" (UniqueName: \"kubernetes.io/projected/51fa621d-3db6-4bda-b380-1c8972b3005b-kube-api-access-4pw7p\") pod \"collect-profiles-29330685-rh22x\" (UID: \"51fa621d-3db6-4bda-b380-1c8972b3005b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330685-rh22x" Oct 07 12:45:00 crc kubenswrapper[5024]: I1007 12:45:00.296392 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51fa621d-3db6-4bda-b380-1c8972b3005b-secret-volume\") pod \"collect-profiles-29330685-rh22x\" (UID: \"51fa621d-3db6-4bda-b380-1c8972b3005b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330685-rh22x" Oct 07 12:45:00 crc kubenswrapper[5024]: I1007 12:45:00.296441 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51fa621d-3db6-4bda-b380-1c8972b3005b-config-volume\") pod \"collect-profiles-29330685-rh22x\" (UID: \"51fa621d-3db6-4bda-b380-1c8972b3005b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29330685-rh22x" Oct 07 12:45:00 crc kubenswrapper[5024]: I1007 12:45:00.398070 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pw7p\" (UniqueName: \"kubernetes.io/projected/51fa621d-3db6-4bda-b380-1c8972b3005b-kube-api-access-4pw7p\") pod \"collect-profiles-29330685-rh22x\" (UID: \"51fa621d-3db6-4bda-b380-1c8972b3005b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330685-rh22x" Oct 07 12:45:00 crc kubenswrapper[5024]: I1007 12:45:00.398225 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51fa621d-3db6-4bda-b380-1c8972b3005b-secret-volume\") pod \"collect-profiles-29330685-rh22x\" (UID: \"51fa621d-3db6-4bda-b380-1c8972b3005b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330685-rh22x" Oct 07 12:45:00 crc kubenswrapper[5024]: I1007 12:45:00.398266 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51fa621d-3db6-4bda-b380-1c8972b3005b-config-volume\") pod \"collect-profiles-29330685-rh22x\" (UID: \"51fa621d-3db6-4bda-b380-1c8972b3005b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330685-rh22x" Oct 07 12:45:00 crc kubenswrapper[5024]: I1007 12:45:00.399246 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51fa621d-3db6-4bda-b380-1c8972b3005b-config-volume\") pod \"collect-profiles-29330685-rh22x\" (UID: \"51fa621d-3db6-4bda-b380-1c8972b3005b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330685-rh22x" Oct 07 12:45:00 crc kubenswrapper[5024]: I1007 12:45:00.410697 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/51fa621d-3db6-4bda-b380-1c8972b3005b-secret-volume\") pod \"collect-profiles-29330685-rh22x\" (UID: \"51fa621d-3db6-4bda-b380-1c8972b3005b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330685-rh22x" Oct 07 12:45:00 crc kubenswrapper[5024]: I1007 12:45:00.415893 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pw7p\" (UniqueName: \"kubernetes.io/projected/51fa621d-3db6-4bda-b380-1c8972b3005b-kube-api-access-4pw7p\") pod \"collect-profiles-29330685-rh22x\" (UID: \"51fa621d-3db6-4bda-b380-1c8972b3005b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330685-rh22x" Oct 07 12:45:00 crc kubenswrapper[5024]: I1007 12:45:00.502838 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29330685-rh22x" Oct 07 12:45:00 crc kubenswrapper[5024]: I1007 12:45:00.948393 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29330685-rh22x"] Oct 07 12:45:01 crc kubenswrapper[5024]: I1007 12:45:01.366038 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29330685-rh22x" event={"ID":"51fa621d-3db6-4bda-b380-1c8972b3005b","Type":"ContainerStarted","Data":"6a11951ea5219702334649faecc0783ad19d417d59a8fa3326ac9e95caadd9b2"} Oct 07 12:45:01 crc kubenswrapper[5024]: I1007 12:45:01.366083 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29330685-rh22x" event={"ID":"51fa621d-3db6-4bda-b380-1c8972b3005b","Type":"ContainerStarted","Data":"452908dfe926cb6911b8e955b7f5007b9b432d7908c05b59633c6233ad047778"} Oct 07 12:45:01 crc kubenswrapper[5024]: I1007 12:45:01.385599 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29330685-rh22x" 
podStartSLOduration=1.385578618 podStartE2EDuration="1.385578618s" podCreationTimestamp="2025-10-07 12:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:45:01.380807578 +0000 UTC m=+1039.456594416" watchObservedRunningTime="2025-10-07 12:45:01.385578618 +0000 UTC m=+1039.461365456" Oct 07 12:45:02 crc kubenswrapper[5024]: I1007 12:45:02.375356 5024 generic.go:334] "Generic (PLEG): container finished" podID="51fa621d-3db6-4bda-b380-1c8972b3005b" containerID="6a11951ea5219702334649faecc0783ad19d417d59a8fa3326ac9e95caadd9b2" exitCode=0 Oct 07 12:45:02 crc kubenswrapper[5024]: I1007 12:45:02.375419 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29330685-rh22x" event={"ID":"51fa621d-3db6-4bda-b380-1c8972b3005b","Type":"ContainerDied","Data":"6a11951ea5219702334649faecc0783ad19d417d59a8fa3326ac9e95caadd9b2"} Oct 07 12:45:02 crc kubenswrapper[5024]: I1007 12:45:02.770381 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Oct 07 12:45:03 crc kubenswrapper[5024]: I1007 12:45:03.663591 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29330685-rh22x" Oct 07 12:45:03 crc kubenswrapper[5024]: I1007 12:45:03.749505 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51fa621d-3db6-4bda-b380-1c8972b3005b-config-volume\") pod \"51fa621d-3db6-4bda-b380-1c8972b3005b\" (UID: \"51fa621d-3db6-4bda-b380-1c8972b3005b\") " Oct 07 12:45:03 crc kubenswrapper[5024]: I1007 12:45:03.749608 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51fa621d-3db6-4bda-b380-1c8972b3005b-secret-volume\") pod \"51fa621d-3db6-4bda-b380-1c8972b3005b\" (UID: \"51fa621d-3db6-4bda-b380-1c8972b3005b\") " Oct 07 12:45:03 crc kubenswrapper[5024]: I1007 12:45:03.749678 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pw7p\" (UniqueName: \"kubernetes.io/projected/51fa621d-3db6-4bda-b380-1c8972b3005b-kube-api-access-4pw7p\") pod \"51fa621d-3db6-4bda-b380-1c8972b3005b\" (UID: \"51fa621d-3db6-4bda-b380-1c8972b3005b\") " Oct 07 12:45:03 crc kubenswrapper[5024]: I1007 12:45:03.750201 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51fa621d-3db6-4bda-b380-1c8972b3005b-config-volume" (OuterVolumeSpecName: "config-volume") pod "51fa621d-3db6-4bda-b380-1c8972b3005b" (UID: "51fa621d-3db6-4bda-b380-1c8972b3005b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:45:03 crc kubenswrapper[5024]: I1007 12:45:03.757320 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51fa621d-3db6-4bda-b380-1c8972b3005b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "51fa621d-3db6-4bda-b380-1c8972b3005b" (UID: "51fa621d-3db6-4bda-b380-1c8972b3005b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:45:03 crc kubenswrapper[5024]: I1007 12:45:03.759059 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51fa621d-3db6-4bda-b380-1c8972b3005b-kube-api-access-4pw7p" (OuterVolumeSpecName: "kube-api-access-4pw7p") pod "51fa621d-3db6-4bda-b380-1c8972b3005b" (UID: "51fa621d-3db6-4bda-b380-1c8972b3005b"). InnerVolumeSpecName "kube-api-access-4pw7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:45:03 crc kubenswrapper[5024]: I1007 12:45:03.851773 5024 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51fa621d-3db6-4bda-b380-1c8972b3005b-config-volume\") on node \"crc\" DevicePath \"\"" Oct 07 12:45:03 crc kubenswrapper[5024]: I1007 12:45:03.852047 5024 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51fa621d-3db6-4bda-b380-1c8972b3005b-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 07 12:45:03 crc kubenswrapper[5024]: I1007 12:45:03.852127 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pw7p\" (UniqueName: \"kubernetes.io/projected/51fa621d-3db6-4bda-b380-1c8972b3005b-kube-api-access-4pw7p\") on node \"crc\" DevicePath \"\"" Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.010899 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-6m24h"] Oct 07 12:45:04 crc kubenswrapper[5024]: E1007 12:45:04.011339 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51fa621d-3db6-4bda-b380-1c8972b3005b" containerName="collect-profiles" Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.011363 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="51fa621d-3db6-4bda-b380-1c8972b3005b" containerName="collect-profiles" Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.011582 5024 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="51fa621d-3db6-4bda-b380-1c8972b3005b" containerName="collect-profiles" Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.012400 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6m24h" Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.027259 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6m24h"] Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.054784 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb7xx\" (UniqueName: \"kubernetes.io/projected/9ab3a6dc-e122-4ea7-8a9e-b6e208d5a66d-kube-api-access-jb7xx\") pod \"keystone-db-create-6m24h\" (UID: \"9ab3a6dc-e122-4ea7-8a9e-b6e208d5a66d\") " pod="openstack/keystone-db-create-6m24h" Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.157330 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb7xx\" (UniqueName: \"kubernetes.io/projected/9ab3a6dc-e122-4ea7-8a9e-b6e208d5a66d-kube-api-access-jb7xx\") pod \"keystone-db-create-6m24h\" (UID: \"9ab3a6dc-e122-4ea7-8a9e-b6e208d5a66d\") " pod="openstack/keystone-db-create-6m24h" Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.177939 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb7xx\" (UniqueName: \"kubernetes.io/projected/9ab3a6dc-e122-4ea7-8a9e-b6e208d5a66d-kube-api-access-jb7xx\") pod \"keystone-db-create-6m24h\" (UID: \"9ab3a6dc-e122-4ea7-8a9e-b6e208d5a66d\") " pod="openstack/keystone-db-create-6m24h" Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.217091 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-s9nrp"] Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.218604 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-s9nrp" Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.222926 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-s9nrp"] Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.259383 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6482j\" (UniqueName: \"kubernetes.io/projected/134d1f22-ab85-4918-9d60-3c39f1d2f66e-kube-api-access-6482j\") pod \"placement-db-create-s9nrp\" (UID: \"134d1f22-ab85-4918-9d60-3c39f1d2f66e\") " pod="openstack/placement-db-create-s9nrp" Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.338857 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6m24h" Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.361575 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6482j\" (UniqueName: \"kubernetes.io/projected/134d1f22-ab85-4918-9d60-3c39f1d2f66e-kube-api-access-6482j\") pod \"placement-db-create-s9nrp\" (UID: \"134d1f22-ab85-4918-9d60-3c39f1d2f66e\") " pod="openstack/placement-db-create-s9nrp" Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.379411 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6482j\" (UniqueName: \"kubernetes.io/projected/134d1f22-ab85-4918-9d60-3c39f1d2f66e-kube-api-access-6482j\") pod \"placement-db-create-s9nrp\" (UID: \"134d1f22-ab85-4918-9d60-3c39f1d2f66e\") " pod="openstack/placement-db-create-s9nrp" Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.396703 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29330685-rh22x" event={"ID":"51fa621d-3db6-4bda-b380-1c8972b3005b","Type":"ContainerDied","Data":"452908dfe926cb6911b8e955b7f5007b9b432d7908c05b59633c6233ad047778"} Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 
12:45:04.396748 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="452908dfe926cb6911b8e955b7f5007b9b432d7908c05b59633c6233ad047778" Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.396750 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29330685-rh22x" Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.459434 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-dgrgv"] Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.462219 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dgrgv" Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.484038 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-dgrgv"] Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.540432 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-s9nrp" Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.564816 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sbxk\" (UniqueName: \"kubernetes.io/projected/ec5d8fec-7318-4048-82bd-fef760cc6a57-kube-api-access-8sbxk\") pod \"glance-db-create-dgrgv\" (UID: \"ec5d8fec-7318-4048-82bd-fef760cc6a57\") " pod="openstack/glance-db-create-dgrgv" Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.666963 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sbxk\" (UniqueName: \"kubernetes.io/projected/ec5d8fec-7318-4048-82bd-fef760cc6a57-kube-api-access-8sbxk\") pod \"glance-db-create-dgrgv\" (UID: \"ec5d8fec-7318-4048-82bd-fef760cc6a57\") " pod="openstack/glance-db-create-dgrgv" Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.686057 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8sbxk\" (UniqueName: \"kubernetes.io/projected/ec5d8fec-7318-4048-82bd-fef760cc6a57-kube-api-access-8sbxk\") pod \"glance-db-create-dgrgv\" (UID: \"ec5d8fec-7318-4048-82bd-fef760cc6a57\") " pod="openstack/glance-db-create-dgrgv" Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.785301 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dgrgv" Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.812289 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6m24h"] Oct 07 12:45:04 crc kubenswrapper[5024]: W1007 12:45:04.817587 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ab3a6dc_e122_4ea7_8a9e_b6e208d5a66d.slice/crio-59008e3b751996742c75d24fc91bd87962ce2c9d4e4e367665779eec256ea420 WatchSource:0}: Error finding container 59008e3b751996742c75d24fc91bd87962ce2c9d4e4e367665779eec256ea420: Status 404 returned error can't find the container with id 59008e3b751996742c75d24fc91bd87962ce2c9d4e4e367665779eec256ea420 Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.839677 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.907736 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="5b493394-e353-45b2-b7a9-71b94654e2e7" containerName="galera" probeResult="failure" output=< Oct 07 12:45:04 crc kubenswrapper[5024]: wsrep_local_state_comment (Joined) differs from Synced Oct 07 12:45:04 crc kubenswrapper[5024]: > Oct 07 12:45:04 crc kubenswrapper[5024]: I1007 12:45:04.980839 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-s9nrp"] Oct 07 12:45:04 crc kubenswrapper[5024]: W1007 12:45:04.983292 5024 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod134d1f22_ab85_4918_9d60_3c39f1d2f66e.slice/crio-c1959674b2df12d01d31b4ea42b6ad9bfa610799ee1ca2b7bc1af53e2a59caf0 WatchSource:0}: Error finding container c1959674b2df12d01d31b4ea42b6ad9bfa610799ee1ca2b7bc1af53e2a59caf0: Status 404 returned error can't find the container with id c1959674b2df12d01d31b4ea42b6ad9bfa610799ee1ca2b7bc1af53e2a59caf0 Oct 07 12:45:05 crc kubenswrapper[5024]: I1007 12:45:05.211877 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-dgrgv"] Oct 07 12:45:05 crc kubenswrapper[5024]: W1007 12:45:05.219364 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec5d8fec_7318_4048_82bd_fef760cc6a57.slice/crio-b0743d113e59158e0e4ac734603cd8e8e178e6e86f9135e46599dd1c4c12b857 WatchSource:0}: Error finding container b0743d113e59158e0e4ac734603cd8e8e178e6e86f9135e46599dd1c4c12b857: Status 404 returned error can't find the container with id b0743d113e59158e0e4ac734603cd8e8e178e6e86f9135e46599dd1c4c12b857 Oct 07 12:45:05 crc kubenswrapper[5024]: I1007 12:45:05.287179 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Oct 07 12:45:05 crc kubenswrapper[5024]: I1007 12:45:05.408033 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6m24h" event={"ID":"9ab3a6dc-e122-4ea7-8a9e-b6e208d5a66d","Type":"ContainerStarted","Data":"fab2cf02b8cc3e68ddff4368f2ec64e1a807a0c2bb32b3721042d51e6c6bf952"} Oct 07 12:45:05 crc kubenswrapper[5024]: I1007 12:45:05.408073 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6m24h" event={"ID":"9ab3a6dc-e122-4ea7-8a9e-b6e208d5a66d","Type":"ContainerStarted","Data":"59008e3b751996742c75d24fc91bd87962ce2c9d4e4e367665779eec256ea420"} Oct 07 12:45:05 crc kubenswrapper[5024]: I1007 12:45:05.409376 5024 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-db-create-dgrgv" event={"ID":"ec5d8fec-7318-4048-82bd-fef760cc6a57","Type":"ContainerStarted","Data":"b0743d113e59158e0e4ac734603cd8e8e178e6e86f9135e46599dd1c4c12b857"} Oct 07 12:45:05 crc kubenswrapper[5024]: I1007 12:45:05.417820 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-s9nrp" event={"ID":"134d1f22-ab85-4918-9d60-3c39f1d2f66e","Type":"ContainerStarted","Data":"c1959674b2df12d01d31b4ea42b6ad9bfa610799ee1ca2b7bc1af53e2a59caf0"} Oct 07 12:45:06 crc kubenswrapper[5024]: I1007 12:45:06.428845 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-s9nrp" event={"ID":"134d1f22-ab85-4918-9d60-3c39f1d2f66e","Type":"ContainerStarted","Data":"d73fa9175e114156745a5c5800be1356370d31676851805968192ea8d4589dd0"} Oct 07 12:45:08 crc kubenswrapper[5024]: I1007 12:45:07.436527 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dgrgv" event={"ID":"ec5d8fec-7318-4048-82bd-fef760cc6a57","Type":"ContainerStarted","Data":"d7c80720f0c8f5bb76d36ef7b83db0f216df1b18106b499129ed5993b41c7c80"} Oct 07 12:45:08 crc kubenswrapper[5024]: I1007 12:45:07.456611 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-s9nrp" podStartSLOduration=3.45658559 podStartE2EDuration="3.45658559s" podCreationTimestamp="2025-10-07 12:45:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:45:07.447228526 +0000 UTC m=+1045.523015394" watchObservedRunningTime="2025-10-07 12:45:07.45658559 +0000 UTC m=+1045.532372458" Oct 07 12:45:08 crc kubenswrapper[5024]: I1007 12:45:07.467690 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-6m24h" podStartSLOduration=4.467669765 podStartE2EDuration="4.467669765s" podCreationTimestamp="2025-10-07 12:45:03 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:45:07.459449614 +0000 UTC m=+1045.535236452" watchObservedRunningTime="2025-10-07 12:45:07.467669765 +0000 UTC m=+1045.543456643" Oct 07 12:45:08 crc kubenswrapper[5024]: I1007 12:45:07.478452 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-dgrgv" podStartSLOduration=3.478427091 podStartE2EDuration="3.478427091s" podCreationTimestamp="2025-10-07 12:45:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:45:07.474030712 +0000 UTC m=+1045.549817550" watchObservedRunningTime="2025-10-07 12:45:07.478427091 +0000 UTC m=+1045.554213969" Oct 07 12:45:11 crc kubenswrapper[5024]: I1007 12:45:11.471435 5024 generic.go:334] "Generic (PLEG): container finished" podID="9ab3a6dc-e122-4ea7-8a9e-b6e208d5a66d" containerID="fab2cf02b8cc3e68ddff4368f2ec64e1a807a0c2bb32b3721042d51e6c6bf952" exitCode=0 Oct 07 12:45:11 crc kubenswrapper[5024]: I1007 12:45:11.471559 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6m24h" event={"ID":"9ab3a6dc-e122-4ea7-8a9e-b6e208d5a66d","Type":"ContainerDied","Data":"fab2cf02b8cc3e68ddff4368f2ec64e1a807a0c2bb32b3721042d51e6c6bf952"} Oct 07 12:45:11 crc kubenswrapper[5024]: I1007 12:45:11.478193 5024 generic.go:334] "Generic (PLEG): container finished" podID="ec5d8fec-7318-4048-82bd-fef760cc6a57" containerID="d7c80720f0c8f5bb76d36ef7b83db0f216df1b18106b499129ed5993b41c7c80" exitCode=0 Oct 07 12:45:11 crc kubenswrapper[5024]: I1007 12:45:11.478310 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dgrgv" event={"ID":"ec5d8fec-7318-4048-82bd-fef760cc6a57","Type":"ContainerDied","Data":"d7c80720f0c8f5bb76d36ef7b83db0f216df1b18106b499129ed5993b41c7c80"} Oct 07 12:45:11 crc 
kubenswrapper[5024]: I1007 12:45:11.483666 5024 generic.go:334] "Generic (PLEG): container finished" podID="134d1f22-ab85-4918-9d60-3c39f1d2f66e" containerID="d73fa9175e114156745a5c5800be1356370d31676851805968192ea8d4589dd0" exitCode=0 Oct 07 12:45:11 crc kubenswrapper[5024]: I1007 12:45:11.483729 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-s9nrp" event={"ID":"134d1f22-ab85-4918-9d60-3c39f1d2f66e","Type":"ContainerDied","Data":"d73fa9175e114156745a5c5800be1356370d31676851805968192ea8d4589dd0"} Oct 07 12:45:12 crc kubenswrapper[5024]: I1007 12:45:12.902639 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dgrgv" Oct 07 12:45:12 crc kubenswrapper[5024]: I1007 12:45:12.908466 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6m24h" Oct 07 12:45:12 crc kubenswrapper[5024]: I1007 12:45:12.923922 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-s9nrp" Oct 07 12:45:13 crc kubenswrapper[5024]: I1007 12:45:13.022200 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sbxk\" (UniqueName: \"kubernetes.io/projected/ec5d8fec-7318-4048-82bd-fef760cc6a57-kube-api-access-8sbxk\") pod \"ec5d8fec-7318-4048-82bd-fef760cc6a57\" (UID: \"ec5d8fec-7318-4048-82bd-fef760cc6a57\") " Oct 07 12:45:13 crc kubenswrapper[5024]: I1007 12:45:13.022325 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb7xx\" (UniqueName: \"kubernetes.io/projected/9ab3a6dc-e122-4ea7-8a9e-b6e208d5a66d-kube-api-access-jb7xx\") pod \"9ab3a6dc-e122-4ea7-8a9e-b6e208d5a66d\" (UID: \"9ab3a6dc-e122-4ea7-8a9e-b6e208d5a66d\") " Oct 07 12:45:13 crc kubenswrapper[5024]: I1007 12:45:13.028288 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ab3a6dc-e122-4ea7-8a9e-b6e208d5a66d-kube-api-access-jb7xx" (OuterVolumeSpecName: "kube-api-access-jb7xx") pod "9ab3a6dc-e122-4ea7-8a9e-b6e208d5a66d" (UID: "9ab3a6dc-e122-4ea7-8a9e-b6e208d5a66d"). InnerVolumeSpecName "kube-api-access-jb7xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:45:13 crc kubenswrapper[5024]: I1007 12:45:13.031467 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec5d8fec-7318-4048-82bd-fef760cc6a57-kube-api-access-8sbxk" (OuterVolumeSpecName: "kube-api-access-8sbxk") pod "ec5d8fec-7318-4048-82bd-fef760cc6a57" (UID: "ec5d8fec-7318-4048-82bd-fef760cc6a57"). InnerVolumeSpecName "kube-api-access-8sbxk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:45:13 crc kubenswrapper[5024]: I1007 12:45:13.124036 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6482j\" (UniqueName: \"kubernetes.io/projected/134d1f22-ab85-4918-9d60-3c39f1d2f66e-kube-api-access-6482j\") pod \"134d1f22-ab85-4918-9d60-3c39f1d2f66e\" (UID: \"134d1f22-ab85-4918-9d60-3c39f1d2f66e\") " Oct 07 12:45:13 crc kubenswrapper[5024]: I1007 12:45:13.125069 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb7xx\" (UniqueName: \"kubernetes.io/projected/9ab3a6dc-e122-4ea7-8a9e-b6e208d5a66d-kube-api-access-jb7xx\") on node \"crc\" DevicePath \"\"" Oct 07 12:45:13 crc kubenswrapper[5024]: I1007 12:45:13.125094 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sbxk\" (UniqueName: \"kubernetes.io/projected/ec5d8fec-7318-4048-82bd-fef760cc6a57-kube-api-access-8sbxk\") on node \"crc\" DevicePath \"\"" Oct 07 12:45:13 crc kubenswrapper[5024]: I1007 12:45:13.127354 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/134d1f22-ab85-4918-9d60-3c39f1d2f66e-kube-api-access-6482j" (OuterVolumeSpecName: "kube-api-access-6482j") pod "134d1f22-ab85-4918-9d60-3c39f1d2f66e" (UID: "134d1f22-ab85-4918-9d60-3c39f1d2f66e"). InnerVolumeSpecName "kube-api-access-6482j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:45:13 crc kubenswrapper[5024]: I1007 12:45:13.226600 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6482j\" (UniqueName: \"kubernetes.io/projected/134d1f22-ab85-4918-9d60-3c39f1d2f66e-kube-api-access-6482j\") on node \"crc\" DevicePath \"\"" Oct 07 12:45:13 crc kubenswrapper[5024]: I1007 12:45:13.498630 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dgrgv" event={"ID":"ec5d8fec-7318-4048-82bd-fef760cc6a57","Type":"ContainerDied","Data":"b0743d113e59158e0e4ac734603cd8e8e178e6e86f9135e46599dd1c4c12b857"} Oct 07 12:45:13 crc kubenswrapper[5024]: I1007 12:45:13.498688 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0743d113e59158e0e4ac734603cd8e8e178e6e86f9135e46599dd1c4c12b857" Oct 07 12:45:13 crc kubenswrapper[5024]: I1007 12:45:13.498663 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dgrgv" Oct 07 12:45:13 crc kubenswrapper[5024]: I1007 12:45:13.500760 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-s9nrp" Oct 07 12:45:13 crc kubenswrapper[5024]: I1007 12:45:13.500815 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-s9nrp" event={"ID":"134d1f22-ab85-4918-9d60-3c39f1d2f66e","Type":"ContainerDied","Data":"c1959674b2df12d01d31b4ea42b6ad9bfa610799ee1ca2b7bc1af53e2a59caf0"} Oct 07 12:45:13 crc kubenswrapper[5024]: I1007 12:45:13.500851 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1959674b2df12d01d31b4ea42b6ad9bfa610799ee1ca2b7bc1af53e2a59caf0" Oct 07 12:45:13 crc kubenswrapper[5024]: I1007 12:45:13.505805 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6m24h" event={"ID":"9ab3a6dc-e122-4ea7-8a9e-b6e208d5a66d","Type":"ContainerDied","Data":"59008e3b751996742c75d24fc91bd87962ce2c9d4e4e367665779eec256ea420"} Oct 07 12:45:13 crc kubenswrapper[5024]: I1007 12:45:13.505837 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59008e3b751996742c75d24fc91bd87962ce2c9d4e4e367665779eec256ea420" Oct 07 12:45:13 crc kubenswrapper[5024]: I1007 12:45:13.505932 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6m24h" Oct 07 12:45:14 crc kubenswrapper[5024]: I1007 12:45:14.057798 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Oct 07 12:45:14 crc kubenswrapper[5024]: I1007 12:45:14.780868 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-bs9lb" podUID="91fa5e61-2577-4fad-9b32-395eb0e5105b" containerName="ovn-controller" probeResult="failure" output=< Oct 07 12:45:14 crc kubenswrapper[5024]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Oct 07 12:45:14 crc kubenswrapper[5024]: > Oct 07 12:45:18 crc kubenswrapper[5024]: I1007 12:45:18.563534 5024 generic.go:334] "Generic (PLEG): container finished" podID="cb894b3e-4bf4-46b0-8e54-e4a17c02d13f" containerID="69ff3a815cfcebe3b00f8e6f065090d3b8dd1b06444974727f4605fb76e990d5" exitCode=0 Oct 07 12:45:18 crc kubenswrapper[5024]: I1007 12:45:18.563648 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f","Type":"ContainerDied","Data":"69ff3a815cfcebe3b00f8e6f065090d3b8dd1b06444974727f4605fb76e990d5"} Oct 07 12:45:18 crc kubenswrapper[5024]: I1007 12:45:18.567768 5024 generic.go:334] "Generic (PLEG): container finished" podID="d2d61d4d-4921-4832-bb53-3ca3a70663cf" containerID="6a80969d05de5dc169894d66537946fa6ea76bfcb60e67f286b3ce277588a810" exitCode=0 Oct 07 12:45:18 crc kubenswrapper[5024]: I1007 12:45:18.567814 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d2d61d4d-4921-4832-bb53-3ca3a70663cf","Type":"ContainerDied","Data":"6a80969d05de5dc169894d66537946fa6ea76bfcb60e67f286b3ce277588a810"} Oct 07 12:45:19 crc kubenswrapper[5024]: I1007 12:45:19.578466 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f","Type":"ContainerStarted","Data":"089816b26a36af7d0c2d6d18c92f22f1f583ffe9022d4ee2660443bbb0d8ce65"} Oct 07 12:45:19 crc kubenswrapper[5024]: I1007 12:45:19.579243 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:45:19 crc kubenswrapper[5024]: I1007 12:45:19.581765 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d2d61d4d-4921-4832-bb53-3ca3a70663cf","Type":"ContainerStarted","Data":"c0a0b0343fd64d8e25099cce68699244689d9756420b81adadf059d849991bce"} Oct 07 12:45:19 crc kubenswrapper[5024]: I1007 12:45:19.582044 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 07 12:45:19 crc kubenswrapper[5024]: I1007 12:45:19.636769 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=51.75455848 podStartE2EDuration="1m0.636745454s" podCreationTimestamp="2025-10-07 12:44:19 +0000 UTC" firstStartedPulling="2025-10-07 12:44:33.441005856 +0000 UTC m=+1011.516792694" lastFinishedPulling="2025-10-07 12:44:42.32319283 +0000 UTC m=+1020.398979668" observedRunningTime="2025-10-07 12:45:19.621460025 +0000 UTC m=+1057.697246873" watchObservedRunningTime="2025-10-07 12:45:19.636745454 +0000 UTC m=+1057.712532292" Oct 07 12:45:19 crc kubenswrapper[5024]: I1007 12:45:19.669860 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=51.350746945 podStartE2EDuration="1m0.669843414s" podCreationTimestamp="2025-10-07 12:44:19 +0000 UTC" firstStartedPulling="2025-10-07 12:44:32.87003244 +0000 UTC m=+1010.945819278" lastFinishedPulling="2025-10-07 12:44:42.189128909 +0000 UTC m=+1020.264915747" observedRunningTime="2025-10-07 12:45:19.667703892 +0000 UTC m=+1057.743490730" watchObservedRunningTime="2025-10-07 
12:45:19.669843414 +0000 UTC m=+1057.745630252" Oct 07 12:45:19 crc kubenswrapper[5024]: I1007 12:45:19.844830 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-bs9lb" podUID="91fa5e61-2577-4fad-9b32-395eb0e5105b" containerName="ovn-controller" probeResult="failure" output=< Oct 07 12:45:19 crc kubenswrapper[5024]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Oct 07 12:45:19 crc kubenswrapper[5024]: > Oct 07 12:45:19 crc kubenswrapper[5024]: I1007 12:45:19.849892 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-q2ktf" Oct 07 12:45:19 crc kubenswrapper[5024]: I1007 12:45:19.871230 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-q2ktf" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.085404 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-bs9lb-config-2b4r7"] Oct 07 12:45:20 crc kubenswrapper[5024]: E1007 12:45:20.086042 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ab3a6dc-e122-4ea7-8a9e-b6e208d5a66d" containerName="mariadb-database-create" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.086164 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ab3a6dc-e122-4ea7-8a9e-b6e208d5a66d" containerName="mariadb-database-create" Oct 07 12:45:20 crc kubenswrapper[5024]: E1007 12:45:20.086244 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="134d1f22-ab85-4918-9d60-3c39f1d2f66e" containerName="mariadb-database-create" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.086312 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="134d1f22-ab85-4918-9d60-3c39f1d2f66e" containerName="mariadb-database-create" Oct 07 12:45:20 crc kubenswrapper[5024]: E1007 12:45:20.086390 5024 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ec5d8fec-7318-4048-82bd-fef760cc6a57" containerName="mariadb-database-create" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.086445 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec5d8fec-7318-4048-82bd-fef760cc6a57" containerName="mariadb-database-create" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.086685 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="134d1f22-ab85-4918-9d60-3c39f1d2f66e" containerName="mariadb-database-create" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.086769 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec5d8fec-7318-4048-82bd-fef760cc6a57" containerName="mariadb-database-create" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.086827 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ab3a6dc-e122-4ea7-8a9e-b6e208d5a66d" containerName="mariadb-database-create" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.087440 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-bs9lb-config-2b4r7" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.089492 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.104404 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bs9lb-config-2b4r7"] Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.141403 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08815fc8-67e1-45c7-8837-9cb56d38f709-scripts\") pod \"ovn-controller-bs9lb-config-2b4r7\" (UID: \"08815fc8-67e1-45c7-8837-9cb56d38f709\") " pod="openstack/ovn-controller-bs9lb-config-2b4r7" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.142000 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnqt8\" (UniqueName: \"kubernetes.io/projected/08815fc8-67e1-45c7-8837-9cb56d38f709-kube-api-access-jnqt8\") pod \"ovn-controller-bs9lb-config-2b4r7\" (UID: \"08815fc8-67e1-45c7-8837-9cb56d38f709\") " pod="openstack/ovn-controller-bs9lb-config-2b4r7" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.142126 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/08815fc8-67e1-45c7-8837-9cb56d38f709-additional-scripts\") pod \"ovn-controller-bs9lb-config-2b4r7\" (UID: \"08815fc8-67e1-45c7-8837-9cb56d38f709\") " pod="openstack/ovn-controller-bs9lb-config-2b4r7" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.142283 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/08815fc8-67e1-45c7-8837-9cb56d38f709-var-run-ovn\") pod \"ovn-controller-bs9lb-config-2b4r7\" (UID: 
\"08815fc8-67e1-45c7-8837-9cb56d38f709\") " pod="openstack/ovn-controller-bs9lb-config-2b4r7" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.142399 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/08815fc8-67e1-45c7-8837-9cb56d38f709-var-run\") pod \"ovn-controller-bs9lb-config-2b4r7\" (UID: \"08815fc8-67e1-45c7-8837-9cb56d38f709\") " pod="openstack/ovn-controller-bs9lb-config-2b4r7" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.142543 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/08815fc8-67e1-45c7-8837-9cb56d38f709-var-log-ovn\") pod \"ovn-controller-bs9lb-config-2b4r7\" (UID: \"08815fc8-67e1-45c7-8837-9cb56d38f709\") " pod="openstack/ovn-controller-bs9lb-config-2b4r7" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.244594 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/08815fc8-67e1-45c7-8837-9cb56d38f709-var-log-ovn\") pod \"ovn-controller-bs9lb-config-2b4r7\" (UID: \"08815fc8-67e1-45c7-8837-9cb56d38f709\") " pod="openstack/ovn-controller-bs9lb-config-2b4r7" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.245214 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08815fc8-67e1-45c7-8837-9cb56d38f709-scripts\") pod \"ovn-controller-bs9lb-config-2b4r7\" (UID: \"08815fc8-67e1-45c7-8837-9cb56d38f709\") " pod="openstack/ovn-controller-bs9lb-config-2b4r7" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.245356 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnqt8\" (UniqueName: \"kubernetes.io/projected/08815fc8-67e1-45c7-8837-9cb56d38f709-kube-api-access-jnqt8\") pod 
\"ovn-controller-bs9lb-config-2b4r7\" (UID: \"08815fc8-67e1-45c7-8837-9cb56d38f709\") " pod="openstack/ovn-controller-bs9lb-config-2b4r7" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.245465 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/08815fc8-67e1-45c7-8837-9cb56d38f709-additional-scripts\") pod \"ovn-controller-bs9lb-config-2b4r7\" (UID: \"08815fc8-67e1-45c7-8837-9cb56d38f709\") " pod="openstack/ovn-controller-bs9lb-config-2b4r7" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.244937 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/08815fc8-67e1-45c7-8837-9cb56d38f709-var-log-ovn\") pod \"ovn-controller-bs9lb-config-2b4r7\" (UID: \"08815fc8-67e1-45c7-8837-9cb56d38f709\") " pod="openstack/ovn-controller-bs9lb-config-2b4r7" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.245583 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/08815fc8-67e1-45c7-8837-9cb56d38f709-var-run-ovn\") pod \"ovn-controller-bs9lb-config-2b4r7\" (UID: \"08815fc8-67e1-45c7-8837-9cb56d38f709\") " pod="openstack/ovn-controller-bs9lb-config-2b4r7" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.245675 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/08815fc8-67e1-45c7-8837-9cb56d38f709-var-run\") pod \"ovn-controller-bs9lb-config-2b4r7\" (UID: \"08815fc8-67e1-45c7-8837-9cb56d38f709\") " pod="openstack/ovn-controller-bs9lb-config-2b4r7" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.245780 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/08815fc8-67e1-45c7-8837-9cb56d38f709-var-run-ovn\") pod \"ovn-controller-bs9lb-config-2b4r7\" (UID: 
\"08815fc8-67e1-45c7-8837-9cb56d38f709\") " pod="openstack/ovn-controller-bs9lb-config-2b4r7" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.245846 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/08815fc8-67e1-45c7-8837-9cb56d38f709-var-run\") pod \"ovn-controller-bs9lb-config-2b4r7\" (UID: \"08815fc8-67e1-45c7-8837-9cb56d38f709\") " pod="openstack/ovn-controller-bs9lb-config-2b4r7" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.246321 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/08815fc8-67e1-45c7-8837-9cb56d38f709-additional-scripts\") pod \"ovn-controller-bs9lb-config-2b4r7\" (UID: \"08815fc8-67e1-45c7-8837-9cb56d38f709\") " pod="openstack/ovn-controller-bs9lb-config-2b4r7" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.247382 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08815fc8-67e1-45c7-8837-9cb56d38f709-scripts\") pod \"ovn-controller-bs9lb-config-2b4r7\" (UID: \"08815fc8-67e1-45c7-8837-9cb56d38f709\") " pod="openstack/ovn-controller-bs9lb-config-2b4r7" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.272053 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnqt8\" (UniqueName: \"kubernetes.io/projected/08815fc8-67e1-45c7-8837-9cb56d38f709-kube-api-access-jnqt8\") pod \"ovn-controller-bs9lb-config-2b4r7\" (UID: \"08815fc8-67e1-45c7-8837-9cb56d38f709\") " pod="openstack/ovn-controller-bs9lb-config-2b4r7" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.408738 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-bs9lb-config-2b4r7" Oct 07 12:45:20 crc kubenswrapper[5024]: I1007 12:45:20.708552 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bs9lb-config-2b4r7"] Oct 07 12:45:21 crc kubenswrapper[5024]: I1007 12:45:21.601181 5024 generic.go:334] "Generic (PLEG): container finished" podID="08815fc8-67e1-45c7-8837-9cb56d38f709" containerID="19a0724a20f9cbc32e62f5106339a5004204577740d0c4d13b542b15b6459785" exitCode=0 Oct 07 12:45:21 crc kubenswrapper[5024]: I1007 12:45:21.601325 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bs9lb-config-2b4r7" event={"ID":"08815fc8-67e1-45c7-8837-9cb56d38f709","Type":"ContainerDied","Data":"19a0724a20f9cbc32e62f5106339a5004204577740d0c4d13b542b15b6459785"} Oct 07 12:45:21 crc kubenswrapper[5024]: I1007 12:45:21.601532 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bs9lb-config-2b4r7" event={"ID":"08815fc8-67e1-45c7-8837-9cb56d38f709","Type":"ContainerStarted","Data":"0eff31e5f296dc57eac23416c225c127f44a72262bf415b60160cb6294fde064"} Oct 07 12:45:22 crc kubenswrapper[5024]: I1007 12:45:22.975421 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-bs9lb-config-2b4r7" Oct 07 12:45:22 crc kubenswrapper[5024]: I1007 12:45:22.994824 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/08815fc8-67e1-45c7-8837-9cb56d38f709-additional-scripts\") pod \"08815fc8-67e1-45c7-8837-9cb56d38f709\" (UID: \"08815fc8-67e1-45c7-8837-9cb56d38f709\") " Oct 07 12:45:22 crc kubenswrapper[5024]: I1007 12:45:22.994905 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnqt8\" (UniqueName: \"kubernetes.io/projected/08815fc8-67e1-45c7-8837-9cb56d38f709-kube-api-access-jnqt8\") pod \"08815fc8-67e1-45c7-8837-9cb56d38f709\" (UID: \"08815fc8-67e1-45c7-8837-9cb56d38f709\") " Oct 07 12:45:22 crc kubenswrapper[5024]: I1007 12:45:22.994943 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08815fc8-67e1-45c7-8837-9cb56d38f709-scripts\") pod \"08815fc8-67e1-45c7-8837-9cb56d38f709\" (UID: \"08815fc8-67e1-45c7-8837-9cb56d38f709\") " Oct 07 12:45:22 crc kubenswrapper[5024]: I1007 12:45:22.995025 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/08815fc8-67e1-45c7-8837-9cb56d38f709-var-log-ovn\") pod \"08815fc8-67e1-45c7-8837-9cb56d38f709\" (UID: \"08815fc8-67e1-45c7-8837-9cb56d38f709\") " Oct 07 12:45:22 crc kubenswrapper[5024]: I1007 12:45:22.995060 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/08815fc8-67e1-45c7-8837-9cb56d38f709-var-run\") pod \"08815fc8-67e1-45c7-8837-9cb56d38f709\" (UID: \"08815fc8-67e1-45c7-8837-9cb56d38f709\") " Oct 07 12:45:22 crc kubenswrapper[5024]: I1007 12:45:22.995095 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/08815fc8-67e1-45c7-8837-9cb56d38f709-var-run-ovn\") pod \"08815fc8-67e1-45c7-8837-9cb56d38f709\" (UID: \"08815fc8-67e1-45c7-8837-9cb56d38f709\") " Oct 07 12:45:22 crc kubenswrapper[5024]: I1007 12:45:22.995169 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08815fc8-67e1-45c7-8837-9cb56d38f709-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "08815fc8-67e1-45c7-8837-9cb56d38f709" (UID: "08815fc8-67e1-45c7-8837-9cb56d38f709"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 07 12:45:22 crc kubenswrapper[5024]: I1007 12:45:22.995239 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08815fc8-67e1-45c7-8837-9cb56d38f709-var-run" (OuterVolumeSpecName: "var-run") pod "08815fc8-67e1-45c7-8837-9cb56d38f709" (UID: "08815fc8-67e1-45c7-8837-9cb56d38f709"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 07 12:45:22 crc kubenswrapper[5024]: I1007 12:45:22.995327 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08815fc8-67e1-45c7-8837-9cb56d38f709-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "08815fc8-67e1-45c7-8837-9cb56d38f709" (UID: "08815fc8-67e1-45c7-8837-9cb56d38f709"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 07 12:45:22 crc kubenswrapper[5024]: I1007 12:45:22.995695 5024 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/08815fc8-67e1-45c7-8837-9cb56d38f709-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 07 12:45:22 crc kubenswrapper[5024]: I1007 12:45:22.995712 5024 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/08815fc8-67e1-45c7-8837-9cb56d38f709-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 07 12:45:22 crc kubenswrapper[5024]: I1007 12:45:22.995724 5024 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/08815fc8-67e1-45c7-8837-9cb56d38f709-var-run\") on node \"crc\" DevicePath \"\"" Oct 07 12:45:22 crc kubenswrapper[5024]: I1007 12:45:22.995717 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08815fc8-67e1-45c7-8837-9cb56d38f709-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "08815fc8-67e1-45c7-8837-9cb56d38f709" (UID: "08815fc8-67e1-45c7-8837-9cb56d38f709"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:45:22 crc kubenswrapper[5024]: I1007 12:45:22.995952 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08815fc8-67e1-45c7-8837-9cb56d38f709-scripts" (OuterVolumeSpecName: "scripts") pod "08815fc8-67e1-45c7-8837-9cb56d38f709" (UID: "08815fc8-67e1-45c7-8837-9cb56d38f709"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:45:23 crc kubenswrapper[5024]: I1007 12:45:23.001903 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08815fc8-67e1-45c7-8837-9cb56d38f709-kube-api-access-jnqt8" (OuterVolumeSpecName: "kube-api-access-jnqt8") pod "08815fc8-67e1-45c7-8837-9cb56d38f709" (UID: "08815fc8-67e1-45c7-8837-9cb56d38f709"). InnerVolumeSpecName "kube-api-access-jnqt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:45:23 crc kubenswrapper[5024]: I1007 12:45:23.097254 5024 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/08815fc8-67e1-45c7-8837-9cb56d38f709-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 07 12:45:23 crc kubenswrapper[5024]: I1007 12:45:23.097289 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnqt8\" (UniqueName: \"kubernetes.io/projected/08815fc8-67e1-45c7-8837-9cb56d38f709-kube-api-access-jnqt8\") on node \"crc\" DevicePath \"\"" Oct 07 12:45:23 crc kubenswrapper[5024]: I1007 12:45:23.097301 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08815fc8-67e1-45c7-8837-9cb56d38f709-scripts\") on node \"crc\" DevicePath \"\"" Oct 07 12:45:23 crc kubenswrapper[5024]: I1007 12:45:23.617323 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bs9lb-config-2b4r7" event={"ID":"08815fc8-67e1-45c7-8837-9cb56d38f709","Type":"ContainerDied","Data":"0eff31e5f296dc57eac23416c225c127f44a72262bf415b60160cb6294fde064"} Oct 07 12:45:23 crc kubenswrapper[5024]: I1007 12:45:23.617641 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0eff31e5f296dc57eac23416c225c127f44a72262bf415b60160cb6294fde064" Oct 07 12:45:23 crc kubenswrapper[5024]: I1007 12:45:23.617410 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-bs9lb-config-2b4r7" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.054228 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-2510-account-create-z66gk"] Oct 07 12:45:24 crc kubenswrapper[5024]: E1007 12:45:24.054648 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08815fc8-67e1-45c7-8837-9cb56d38f709" containerName="ovn-config" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.054664 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="08815fc8-67e1-45c7-8837-9cb56d38f709" containerName="ovn-config" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.054860 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="08815fc8-67e1-45c7-8837-9cb56d38f709" containerName="ovn-config" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.055499 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2510-account-create-z66gk" Oct 07 12:45:24 crc kubenswrapper[5024]: W1007 12:45:24.057913 5024 reflector.go:561] object-"openstack"/"keystone-db-secret": failed to list *v1.Secret: secrets "keystone-db-secret" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Oct 07 12:45:24 crc kubenswrapper[5024]: E1007 12:45:24.057967 5024 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-db-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"keystone-db-secret\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.063394 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2510-account-create-z66gk"] Oct 07 
12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.110853 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btdr9\" (UniqueName: \"kubernetes.io/projected/475c1ed6-4adb-4aa3-bb17-f4a41d8a7470-kube-api-access-btdr9\") pod \"keystone-2510-account-create-z66gk\" (UID: \"475c1ed6-4adb-4aa3-bb17-f4a41d8a7470\") " pod="openstack/keystone-2510-account-create-z66gk" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.126900 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-bs9lb-config-2b4r7"] Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.134964 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-bs9lb-config-2b4r7"] Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.212489 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btdr9\" (UniqueName: \"kubernetes.io/projected/475c1ed6-4adb-4aa3-bb17-f4a41d8a7470-kube-api-access-btdr9\") pod \"keystone-2510-account-create-z66gk\" (UID: \"475c1ed6-4adb-4aa3-bb17-f4a41d8a7470\") " pod="openstack/keystone-2510-account-create-z66gk" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.219666 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-bs9lb-config-5vhp4"] Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.220865 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-bs9lb-config-5vhp4" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.227082 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.233438 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bs9lb-config-5vhp4"] Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.242961 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btdr9\" (UniqueName: \"kubernetes.io/projected/475c1ed6-4adb-4aa3-bb17-f4a41d8a7470-kube-api-access-btdr9\") pod \"keystone-2510-account-create-z66gk\" (UID: \"475c1ed6-4adb-4aa3-bb17-f4a41d8a7470\") " pod="openstack/keystone-2510-account-create-z66gk" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.314184 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-var-log-ovn\") pod \"ovn-controller-bs9lb-config-5vhp4\" (UID: \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\") " pod="openstack/ovn-controller-bs9lb-config-5vhp4" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.314261 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tkmq\" (UniqueName: \"kubernetes.io/projected/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-kube-api-access-4tkmq\") pod \"ovn-controller-bs9lb-config-5vhp4\" (UID: \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\") " pod="openstack/ovn-controller-bs9lb-config-5vhp4" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.314281 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-var-run-ovn\") pod \"ovn-controller-bs9lb-config-5vhp4\" (UID: 
\"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\") " pod="openstack/ovn-controller-bs9lb-config-5vhp4" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.314364 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-additional-scripts\") pod \"ovn-controller-bs9lb-config-5vhp4\" (UID: \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\") " pod="openstack/ovn-controller-bs9lb-config-5vhp4" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.314411 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-var-run\") pod \"ovn-controller-bs9lb-config-5vhp4\" (UID: \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\") " pod="openstack/ovn-controller-bs9lb-config-5vhp4" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.314429 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-scripts\") pod \"ovn-controller-bs9lb-config-5vhp4\" (UID: \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\") " pod="openstack/ovn-controller-bs9lb-config-5vhp4" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.361181 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-ed35-account-create-d6lw4"] Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.362521 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ed35-account-create-d6lw4" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.365387 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.380784 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ed35-account-create-d6lw4"] Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.409403 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2510-account-create-z66gk" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.415515 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-var-log-ovn\") pod \"ovn-controller-bs9lb-config-5vhp4\" (UID: \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\") " pod="openstack/ovn-controller-bs9lb-config-5vhp4" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.415592 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tkmq\" (UniqueName: \"kubernetes.io/projected/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-kube-api-access-4tkmq\") pod \"ovn-controller-bs9lb-config-5vhp4\" (UID: \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\") " pod="openstack/ovn-controller-bs9lb-config-5vhp4" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.415624 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-var-run-ovn\") pod \"ovn-controller-bs9lb-config-5vhp4\" (UID: \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\") " pod="openstack/ovn-controller-bs9lb-config-5vhp4" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.415671 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nct2m\" 
(UniqueName: \"kubernetes.io/projected/2eab15ce-c0a2-44d0-822e-f3b1ca4de908-kube-api-access-nct2m\") pod \"placement-ed35-account-create-d6lw4\" (UID: \"2eab15ce-c0a2-44d0-822e-f3b1ca4de908\") " pod="openstack/placement-ed35-account-create-d6lw4" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.415726 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-additional-scripts\") pod \"ovn-controller-bs9lb-config-5vhp4\" (UID: \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\") " pod="openstack/ovn-controller-bs9lb-config-5vhp4" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.415755 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-var-run\") pod \"ovn-controller-bs9lb-config-5vhp4\" (UID: \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\") " pod="openstack/ovn-controller-bs9lb-config-5vhp4" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.415772 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-scripts\") pod \"ovn-controller-bs9lb-config-5vhp4\" (UID: \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\") " pod="openstack/ovn-controller-bs9lb-config-5vhp4" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.416549 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-var-log-ovn\") pod \"ovn-controller-bs9lb-config-5vhp4\" (UID: \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\") " pod="openstack/ovn-controller-bs9lb-config-5vhp4" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.416562 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-var-run-ovn\") pod \"ovn-controller-bs9lb-config-5vhp4\" (UID: \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\") " pod="openstack/ovn-controller-bs9lb-config-5vhp4" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.416943 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-var-run\") pod \"ovn-controller-bs9lb-config-5vhp4\" (UID: \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\") " pod="openstack/ovn-controller-bs9lb-config-5vhp4" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.417186 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-additional-scripts\") pod \"ovn-controller-bs9lb-config-5vhp4\" (UID: \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\") " pod="openstack/ovn-controller-bs9lb-config-5vhp4" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.417599 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-scripts\") pod \"ovn-controller-bs9lb-config-5vhp4\" (UID: \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\") " pod="openstack/ovn-controller-bs9lb-config-5vhp4" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.434975 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tkmq\" (UniqueName: \"kubernetes.io/projected/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-kube-api-access-4tkmq\") pod \"ovn-controller-bs9lb-config-5vhp4\" (UID: \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\") " pod="openstack/ovn-controller-bs9lb-config-5vhp4" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.517152 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nct2m\" (UniqueName: 
\"kubernetes.io/projected/2eab15ce-c0a2-44d0-822e-f3b1ca4de908-kube-api-access-nct2m\") pod \"placement-ed35-account-create-d6lw4\" (UID: \"2eab15ce-c0a2-44d0-822e-f3b1ca4de908\") " pod="openstack/placement-ed35-account-create-d6lw4" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.540944 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bs9lb-config-5vhp4" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.542825 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nct2m\" (UniqueName: \"kubernetes.io/projected/2eab15ce-c0a2-44d0-822e-f3b1ca4de908-kube-api-access-nct2m\") pod \"placement-ed35-account-create-d6lw4\" (UID: \"2eab15ce-c0a2-44d0-822e-f3b1ca4de908\") " pod="openstack/placement-ed35-account-create-d6lw4" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.575429 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-5f9a-account-create-wsc9c"] Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.576587 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5f9a-account-create-wsc9c" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.586652 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.596502 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-5f9a-account-create-wsc9c"] Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.683315 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ed35-account-create-d6lw4" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.723674 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr7lv\" (UniqueName: \"kubernetes.io/projected/afdbfdf7-cc14-4ff3-bd1a-6475b8f4ca61-kube-api-access-wr7lv\") pod \"glance-5f9a-account-create-wsc9c\" (UID: \"afdbfdf7-cc14-4ff3-bd1a-6475b8f4ca61\") " pod="openstack/glance-5f9a-account-create-wsc9c" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.775550 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08815fc8-67e1-45c7-8837-9cb56d38f709" path="/var/lib/kubelet/pods/08815fc8-67e1-45c7-8837-9cb56d38f709/volumes" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.805032 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-bs9lb" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.830172 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr7lv\" (UniqueName: \"kubernetes.io/projected/afdbfdf7-cc14-4ff3-bd1a-6475b8f4ca61-kube-api-access-wr7lv\") pod \"glance-5f9a-account-create-wsc9c\" (UID: \"afdbfdf7-cc14-4ff3-bd1a-6475b8f4ca61\") " pod="openstack/glance-5f9a-account-create-wsc9c" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.856812 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr7lv\" (UniqueName: \"kubernetes.io/projected/afdbfdf7-cc14-4ff3-bd1a-6475b8f4ca61-kube-api-access-wr7lv\") pod \"glance-5f9a-account-create-wsc9c\" (UID: \"afdbfdf7-cc14-4ff3-bd1a-6475b8f4ca61\") " pod="openstack/glance-5f9a-account-create-wsc9c" Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.933349 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2510-account-create-z66gk"] Oct 07 12:45:24 crc kubenswrapper[5024]: I1007 12:45:24.949168 5024 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5f9a-account-create-wsc9c" Oct 07 12:45:25 crc kubenswrapper[5024]: I1007 12:45:25.156339 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bs9lb-config-5vhp4"] Oct 07 12:45:25 crc kubenswrapper[5024]: I1007 12:45:25.194421 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ed35-account-create-d6lw4"] Oct 07 12:45:25 crc kubenswrapper[5024]: I1007 12:45:25.387851 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-5f9a-account-create-wsc9c"] Oct 07 12:45:25 crc kubenswrapper[5024]: W1007 12:45:25.397885 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafdbfdf7_cc14_4ff3_bd1a_6475b8f4ca61.slice/crio-719370d7c88c490185d2abb74f4d14920eb33dde04ab6944a0163c0e2d98e979 WatchSource:0}: Error finding container 719370d7c88c490185d2abb74f4d14920eb33dde04ab6944a0163c0e2d98e979: Status 404 returned error can't find the container with id 719370d7c88c490185d2abb74f4d14920eb33dde04ab6944a0163c0e2d98e979 Oct 07 12:45:25 crc kubenswrapper[5024]: I1007 12:45:25.639997 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Oct 07 12:45:25 crc kubenswrapper[5024]: I1007 12:45:25.659366 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ed35-account-create-d6lw4" event={"ID":"2eab15ce-c0a2-44d0-822e-f3b1ca4de908","Type":"ContainerStarted","Data":"1dae479b3a729849c72bd1a3a39923490522039353db4ae65cb95a4249eb31b2"} Oct 07 12:45:25 crc kubenswrapper[5024]: I1007 12:45:25.659410 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ed35-account-create-d6lw4" event={"ID":"2eab15ce-c0a2-44d0-822e-f3b1ca4de908","Type":"ContainerStarted","Data":"03dd0e43006c44d31944800e7e391904d6f80ea368d4a808656eeb2c4cdea1fd"} Oct 07 12:45:25 crc 
kubenswrapper[5024]: I1007 12:45:25.663392 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2510-account-create-z66gk" event={"ID":"475c1ed6-4adb-4aa3-bb17-f4a41d8a7470","Type":"ContainerStarted","Data":"5855674d747459b6fcae0b1eb8372f80824d18f376d041839368923b43ecb31c"} Oct 07 12:45:25 crc kubenswrapper[5024]: I1007 12:45:25.665400 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bs9lb-config-5vhp4" event={"ID":"72e3c15c-6bac-41e4-87c4-13fa2e9836f7","Type":"ContainerStarted","Data":"25c89f56f78b8b88cd2c3df3f916d763412e1939008df36aaafee4fc23cd252c"} Oct 07 12:45:25 crc kubenswrapper[5024]: I1007 12:45:25.665429 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bs9lb-config-5vhp4" event={"ID":"72e3c15c-6bac-41e4-87c4-13fa2e9836f7","Type":"ContainerStarted","Data":"c06e5fba2d062cd50c1241d464e84df6d7dadaa8071c7df97435edc5a78097ea"} Oct 07 12:45:25 crc kubenswrapper[5024]: I1007 12:45:25.666659 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5f9a-account-create-wsc9c" event={"ID":"afdbfdf7-cc14-4ff3-bd1a-6475b8f4ca61","Type":"ContainerStarted","Data":"77b8511fc7369531f2e543e5f5d22eee7e99254ba4b9152da8ec4bb655e8ee37"} Oct 07 12:45:25 crc kubenswrapper[5024]: I1007 12:45:25.666688 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5f9a-account-create-wsc9c" event={"ID":"afdbfdf7-cc14-4ff3-bd1a-6475b8f4ca61","Type":"ContainerStarted","Data":"719370d7c88c490185d2abb74f4d14920eb33dde04ab6944a0163c0e2d98e979"} Oct 07 12:45:25 crc kubenswrapper[5024]: I1007 12:45:25.674771 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-ed35-account-create-d6lw4" podStartSLOduration=1.674754338 podStartE2EDuration="1.674754338s" podCreationTimestamp="2025-10-07 12:45:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-10-07 12:45:25.673111299 +0000 UTC m=+1063.748898127" watchObservedRunningTime="2025-10-07 12:45:25.674754338 +0000 UTC m=+1063.750541176" Oct 07 12:45:25 crc kubenswrapper[5024]: I1007 12:45:25.714725 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-bs9lb-config-5vhp4" podStartSLOduration=1.7147105599999999 podStartE2EDuration="1.71471056s" podCreationTimestamp="2025-10-07 12:45:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:45:25.695951119 +0000 UTC m=+1063.771737957" watchObservedRunningTime="2025-10-07 12:45:25.71471056 +0000 UTC m=+1063.790497398" Oct 07 12:45:25 crc kubenswrapper[5024]: I1007 12:45:25.717531 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-5f9a-account-create-wsc9c" podStartSLOduration=1.717515602 podStartE2EDuration="1.717515602s" podCreationTimestamp="2025-10-07 12:45:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:45:25.712550476 +0000 UTC m=+1063.788337314" watchObservedRunningTime="2025-10-07 12:45:25.717515602 +0000 UTC m=+1063.793302440" Oct 07 12:45:26 crc kubenswrapper[5024]: E1007 12:45:26.156292 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafdbfdf7_cc14_4ff3_bd1a_6475b8f4ca61.slice/crio-77b8511fc7369531f2e543e5f5d22eee7e99254ba4b9152da8ec4bb655e8ee37.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod475c1ed6_4adb_4aa3_bb17_f4a41d8a7470.slice/crio-conmon-b439e414ae7c91dce7e818c6914f63349936a3c84a0339e3c63182a312547abc.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafdbfdf7_cc14_4ff3_bd1a_6475b8f4ca61.slice/crio-conmon-77b8511fc7369531f2e543e5f5d22eee7e99254ba4b9152da8ec4bb655e8ee37.scope\": RecentStats: unable to find data in memory cache]" Oct 07 12:45:26 crc kubenswrapper[5024]: I1007 12:45:26.676892 5024 generic.go:334] "Generic (PLEG): container finished" podID="72e3c15c-6bac-41e4-87c4-13fa2e9836f7" containerID="25c89f56f78b8b88cd2c3df3f916d763412e1939008df36aaafee4fc23cd252c" exitCode=0 Oct 07 12:45:26 crc kubenswrapper[5024]: I1007 12:45:26.677669 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bs9lb-config-5vhp4" event={"ID":"72e3c15c-6bac-41e4-87c4-13fa2e9836f7","Type":"ContainerDied","Data":"25c89f56f78b8b88cd2c3df3f916d763412e1939008df36aaafee4fc23cd252c"} Oct 07 12:45:26 crc kubenswrapper[5024]: I1007 12:45:26.679484 5024 generic.go:334] "Generic (PLEG): container finished" podID="afdbfdf7-cc14-4ff3-bd1a-6475b8f4ca61" containerID="77b8511fc7369531f2e543e5f5d22eee7e99254ba4b9152da8ec4bb655e8ee37" exitCode=0 Oct 07 12:45:26 crc kubenswrapper[5024]: I1007 12:45:26.679581 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5f9a-account-create-wsc9c" event={"ID":"afdbfdf7-cc14-4ff3-bd1a-6475b8f4ca61","Type":"ContainerDied","Data":"77b8511fc7369531f2e543e5f5d22eee7e99254ba4b9152da8ec4bb655e8ee37"} Oct 07 12:45:26 crc kubenswrapper[5024]: I1007 12:45:26.681407 5024 generic.go:334] "Generic (PLEG): container finished" podID="2eab15ce-c0a2-44d0-822e-f3b1ca4de908" containerID="1dae479b3a729849c72bd1a3a39923490522039353db4ae65cb95a4249eb31b2" exitCode=0 Oct 07 12:45:26 crc kubenswrapper[5024]: I1007 12:45:26.681486 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ed35-account-create-d6lw4" event={"ID":"2eab15ce-c0a2-44d0-822e-f3b1ca4de908","Type":"ContainerDied","Data":"1dae479b3a729849c72bd1a3a39923490522039353db4ae65cb95a4249eb31b2"} Oct 07 12:45:26 crc 
kubenswrapper[5024]: I1007 12:45:26.683317 5024 generic.go:334] "Generic (PLEG): container finished" podID="475c1ed6-4adb-4aa3-bb17-f4a41d8a7470" containerID="b439e414ae7c91dce7e818c6914f63349936a3c84a0339e3c63182a312547abc" exitCode=0 Oct 07 12:45:26 crc kubenswrapper[5024]: I1007 12:45:26.683371 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2510-account-create-z66gk" event={"ID":"475c1ed6-4adb-4aa3-bb17-f4a41d8a7470","Type":"ContainerDied","Data":"b439e414ae7c91dce7e818c6914f63349936a3c84a0339e3c63182a312547abc"} Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.066956 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5f9a-account-create-wsc9c" Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.172749 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ed35-account-create-d6lw4" Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.183597 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wr7lv\" (UniqueName: \"kubernetes.io/projected/afdbfdf7-cc14-4ff3-bd1a-6475b8f4ca61-kube-api-access-wr7lv\") pod \"afdbfdf7-cc14-4ff3-bd1a-6475b8f4ca61\" (UID: \"afdbfdf7-cc14-4ff3-bd1a-6475b8f4ca61\") " Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.189985 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bs9lb-config-5vhp4" Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.190761 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-2510-account-create-z66gk" Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.191406 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afdbfdf7-cc14-4ff3-bd1a-6475b8f4ca61-kube-api-access-wr7lv" (OuterVolumeSpecName: "kube-api-access-wr7lv") pod "afdbfdf7-cc14-4ff3-bd1a-6475b8f4ca61" (UID: "afdbfdf7-cc14-4ff3-bd1a-6475b8f4ca61"). InnerVolumeSpecName "kube-api-access-wr7lv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.285105 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nct2m\" (UniqueName: \"kubernetes.io/projected/2eab15ce-c0a2-44d0-822e-f3b1ca4de908-kube-api-access-nct2m\") pod \"2eab15ce-c0a2-44d0-822e-f3b1ca4de908\" (UID: \"2eab15ce-c0a2-44d0-822e-f3b1ca4de908\") " Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.285183 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-scripts\") pod \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\" (UID: \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\") " Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.285217 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btdr9\" (UniqueName: \"kubernetes.io/projected/475c1ed6-4adb-4aa3-bb17-f4a41d8a7470-kube-api-access-btdr9\") pod \"475c1ed6-4adb-4aa3-bb17-f4a41d8a7470\" (UID: \"475c1ed6-4adb-4aa3-bb17-f4a41d8a7470\") " Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.285272 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-var-run-ovn\") pod \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\" (UID: \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\") " Oct 07 12:45:28 crc 
kubenswrapper[5024]: I1007 12:45:28.285297 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-additional-scripts\") pod \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\" (UID: \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\") " Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.285323 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-var-log-ovn\") pod \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\" (UID: \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\") " Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.285357 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tkmq\" (UniqueName: \"kubernetes.io/projected/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-kube-api-access-4tkmq\") pod \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\" (UID: \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\") " Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.285375 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "72e3c15c-6bac-41e4-87c4-13fa2e9836f7" (UID: "72e3c15c-6bac-41e4-87c4-13fa2e9836f7"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.285409 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-var-run\") pod \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\" (UID: \"72e3c15c-6bac-41e4-87c4-13fa2e9836f7\") "
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.285428 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "72e3c15c-6bac-41e4-87c4-13fa2e9836f7" (UID: "72e3c15c-6bac-41e4-87c4-13fa2e9836f7"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.285735 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wr7lv\" (UniqueName: \"kubernetes.io/projected/afdbfdf7-cc14-4ff3-bd1a-6475b8f4ca61-kube-api-access-wr7lv\") on node \"crc\" DevicePath \"\""
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.285749 5024 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-var-run-ovn\") on node \"crc\" DevicePath \"\""
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.285757 5024 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-var-log-ovn\") on node \"crc\" DevicePath \"\""
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.285802 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-var-run" (OuterVolumeSpecName: "var-run") pod "72e3c15c-6bac-41e4-87c4-13fa2e9836f7" (UID: "72e3c15c-6bac-41e4-87c4-13fa2e9836f7"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.285958 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "72e3c15c-6bac-41e4-87c4-13fa2e9836f7" (UID: "72e3c15c-6bac-41e4-87c4-13fa2e9836f7"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.286118 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-scripts" (OuterVolumeSpecName: "scripts") pod "72e3c15c-6bac-41e4-87c4-13fa2e9836f7" (UID: "72e3c15c-6bac-41e4-87c4-13fa2e9836f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.288184 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eab15ce-c0a2-44d0-822e-f3b1ca4de908-kube-api-access-nct2m" (OuterVolumeSpecName: "kube-api-access-nct2m") pod "2eab15ce-c0a2-44d0-822e-f3b1ca4de908" (UID: "2eab15ce-c0a2-44d0-822e-f3b1ca4de908"). InnerVolumeSpecName "kube-api-access-nct2m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.288782 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/475c1ed6-4adb-4aa3-bb17-f4a41d8a7470-kube-api-access-btdr9" (OuterVolumeSpecName: "kube-api-access-btdr9") pod "475c1ed6-4adb-4aa3-bb17-f4a41d8a7470" (UID: "475c1ed6-4adb-4aa3-bb17-f4a41d8a7470"). InnerVolumeSpecName "kube-api-access-btdr9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.289291 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-kube-api-access-4tkmq" (OuterVolumeSpecName: "kube-api-access-4tkmq") pod "72e3c15c-6bac-41e4-87c4-13fa2e9836f7" (UID: "72e3c15c-6bac-41e4-87c4-13fa2e9836f7"). InnerVolumeSpecName "kube-api-access-4tkmq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.386881 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nct2m\" (UniqueName: \"kubernetes.io/projected/2eab15ce-c0a2-44d0-822e-f3b1ca4de908-kube-api-access-nct2m\") on node \"crc\" DevicePath \"\""
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.386924 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-scripts\") on node \"crc\" DevicePath \"\""
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.386940 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btdr9\" (UniqueName: \"kubernetes.io/projected/475c1ed6-4adb-4aa3-bb17-f4a41d8a7470-kube-api-access-btdr9\") on node \"crc\" DevicePath \"\""
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.386957 5024 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-additional-scripts\") on node \"crc\" DevicePath \"\""
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.386973 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tkmq\" (UniqueName: \"kubernetes.io/projected/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-kube-api-access-4tkmq\") on node \"crc\" DevicePath \"\""
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.386990 5024 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/72e3c15c-6bac-41e4-87c4-13fa2e9836f7-var-run\") on node \"crc\" DevicePath \"\""
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.704169 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2510-account-create-z66gk"
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.704187 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2510-account-create-z66gk" event={"ID":"475c1ed6-4adb-4aa3-bb17-f4a41d8a7470","Type":"ContainerDied","Data":"5855674d747459b6fcae0b1eb8372f80824d18f376d041839368923b43ecb31c"}
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.704236 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5855674d747459b6fcae0b1eb8372f80824d18f376d041839368923b43ecb31c"
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.706905 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ed35-account-create-d6lw4" event={"ID":"2eab15ce-c0a2-44d0-822e-f3b1ca4de908","Type":"ContainerDied","Data":"03dd0e43006c44d31944800e7e391904d6f80ea368d4a808656eeb2c4cdea1fd"}
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.706955 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ed35-account-create-d6lw4"
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.706975 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03dd0e43006c44d31944800e7e391904d6f80ea368d4a808656eeb2c4cdea1fd"
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.708509 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bs9lb-config-5vhp4" event={"ID":"72e3c15c-6bac-41e4-87c4-13fa2e9836f7","Type":"ContainerDied","Data":"c06e5fba2d062cd50c1241d464e84df6d7dadaa8071c7df97435edc5a78097ea"}
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.708535 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bs9lb-config-5vhp4"
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.708552 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c06e5fba2d062cd50c1241d464e84df6d7dadaa8071c7df97435edc5a78097ea"
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.719130 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5f9a-account-create-wsc9c" event={"ID":"afdbfdf7-cc14-4ff3-bd1a-6475b8f4ca61","Type":"ContainerDied","Data":"719370d7c88c490185d2abb74f4d14920eb33dde04ab6944a0163c0e2d98e979"}
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.719285 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="719370d7c88c490185d2abb74f4d14920eb33dde04ab6944a0163c0e2d98e979"
Oct 07 12:45:28 crc kubenswrapper[5024]: I1007 12:45:28.719386 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5f9a-account-create-wsc9c"
Oct 07 12:45:29 crc kubenswrapper[5024]: I1007 12:45:29.263769 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-bs9lb-config-5vhp4"]
Oct 07 12:45:29 crc kubenswrapper[5024]: I1007 12:45:29.269216 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-bs9lb-config-5vhp4"]
Oct 07 12:45:29 crc kubenswrapper[5024]: I1007 12:45:29.813453 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-cvp9t"]
Oct 07 12:45:29 crc kubenswrapper[5024]: E1007 12:45:29.814642 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="475c1ed6-4adb-4aa3-bb17-f4a41d8a7470" containerName="mariadb-account-create"
Oct 07 12:45:29 crc kubenswrapper[5024]: I1007 12:45:29.814671 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="475c1ed6-4adb-4aa3-bb17-f4a41d8a7470" containerName="mariadb-account-create"
Oct 07 12:45:29 crc kubenswrapper[5024]: E1007 12:45:29.814701 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eab15ce-c0a2-44d0-822e-f3b1ca4de908" containerName="mariadb-account-create"
Oct 07 12:45:29 crc kubenswrapper[5024]: I1007 12:45:29.814710 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eab15ce-c0a2-44d0-822e-f3b1ca4de908" containerName="mariadb-account-create"
Oct 07 12:45:29 crc kubenswrapper[5024]: E1007 12:45:29.814748 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72e3c15c-6bac-41e4-87c4-13fa2e9836f7" containerName="ovn-config"
Oct 07 12:45:29 crc kubenswrapper[5024]: I1007 12:45:29.814762 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="72e3c15c-6bac-41e4-87c4-13fa2e9836f7" containerName="ovn-config"
Oct 07 12:45:29 crc kubenswrapper[5024]: E1007 12:45:29.814789 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afdbfdf7-cc14-4ff3-bd1a-6475b8f4ca61" containerName="mariadb-account-create"
Oct 07 12:45:29 crc kubenswrapper[5024]: I1007 12:45:29.814799 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="afdbfdf7-cc14-4ff3-bd1a-6475b8f4ca61" containerName="mariadb-account-create"
Oct 07 12:45:29 crc kubenswrapper[5024]: I1007 12:45:29.815345 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="72e3c15c-6bac-41e4-87c4-13fa2e9836f7" containerName="ovn-config"
Oct 07 12:45:29 crc kubenswrapper[5024]: I1007 12:45:29.815388 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eab15ce-c0a2-44d0-822e-f3b1ca4de908" containerName="mariadb-account-create"
Oct 07 12:45:29 crc kubenswrapper[5024]: I1007 12:45:29.815414 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="475c1ed6-4adb-4aa3-bb17-f4a41d8a7470" containerName="mariadb-account-create"
Oct 07 12:45:29 crc kubenswrapper[5024]: I1007 12:45:29.815447 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="afdbfdf7-cc14-4ff3-bd1a-6475b8f4ca61" containerName="mariadb-account-create"
Oct 07 12:45:29 crc kubenswrapper[5024]: I1007 12:45:29.820578 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-cvp9t"
Oct 07 12:45:29 crc kubenswrapper[5024]: I1007 12:45:29.826604 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-k8xkf"
Oct 07 12:45:29 crc kubenswrapper[5024]: I1007 12:45:29.829641 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Oct 07 12:45:29 crc kubenswrapper[5024]: I1007 12:45:29.843329 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-cvp9t"]
Oct 07 12:45:29 crc kubenswrapper[5024]: I1007 12:45:29.922509 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36b57d44-185e-4645-9078-4deb8da00531-combined-ca-bundle\") pod \"glance-db-sync-cvp9t\" (UID: \"36b57d44-185e-4645-9078-4deb8da00531\") " pod="openstack/glance-db-sync-cvp9t"
Oct 07 12:45:29 crc kubenswrapper[5024]: I1007 12:45:29.922670 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36b57d44-185e-4645-9078-4deb8da00531-config-data\") pod \"glance-db-sync-cvp9t\" (UID: \"36b57d44-185e-4645-9078-4deb8da00531\") " pod="openstack/glance-db-sync-cvp9t"
Oct 07 12:45:29 crc kubenswrapper[5024]: I1007 12:45:29.922747 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/36b57d44-185e-4645-9078-4deb8da00531-db-sync-config-data\") pod \"glance-db-sync-cvp9t\" (UID: \"36b57d44-185e-4645-9078-4deb8da00531\") " pod="openstack/glance-db-sync-cvp9t"
Oct 07 12:45:29 crc kubenswrapper[5024]: I1007 12:45:29.923066 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbbh6\" (UniqueName: \"kubernetes.io/projected/36b57d44-185e-4645-9078-4deb8da00531-kube-api-access-rbbh6\") pod \"glance-db-sync-cvp9t\" (UID: \"36b57d44-185e-4645-9078-4deb8da00531\") " pod="openstack/glance-db-sync-cvp9t"
Oct 07 12:45:30 crc kubenswrapper[5024]: I1007 12:45:30.024581 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/36b57d44-185e-4645-9078-4deb8da00531-db-sync-config-data\") pod \"glance-db-sync-cvp9t\" (UID: \"36b57d44-185e-4645-9078-4deb8da00531\") " pod="openstack/glance-db-sync-cvp9t"
Oct 07 12:45:30 crc kubenswrapper[5024]: I1007 12:45:30.024695 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbbh6\" (UniqueName: \"kubernetes.io/projected/36b57d44-185e-4645-9078-4deb8da00531-kube-api-access-rbbh6\") pod \"glance-db-sync-cvp9t\" (UID: \"36b57d44-185e-4645-9078-4deb8da00531\") " pod="openstack/glance-db-sync-cvp9t"
Oct 07 12:45:30 crc kubenswrapper[5024]: I1007 12:45:30.024757 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36b57d44-185e-4645-9078-4deb8da00531-combined-ca-bundle\") pod \"glance-db-sync-cvp9t\" (UID: \"36b57d44-185e-4645-9078-4deb8da00531\") " pod="openstack/glance-db-sync-cvp9t"
Oct 07 12:45:30 crc kubenswrapper[5024]: I1007 12:45:30.024791 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36b57d44-185e-4645-9078-4deb8da00531-config-data\") pod \"glance-db-sync-cvp9t\" (UID: \"36b57d44-185e-4645-9078-4deb8da00531\") " pod="openstack/glance-db-sync-cvp9t"
Oct 07 12:45:30 crc kubenswrapper[5024]: I1007 12:45:30.033102 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/36b57d44-185e-4645-9078-4deb8da00531-db-sync-config-data\") pod \"glance-db-sync-cvp9t\" (UID: \"36b57d44-185e-4645-9078-4deb8da00531\") " pod="openstack/glance-db-sync-cvp9t"
Oct 07 12:45:30 crc kubenswrapper[5024]: I1007 12:45:30.033215 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36b57d44-185e-4645-9078-4deb8da00531-config-data\") pod \"glance-db-sync-cvp9t\" (UID: \"36b57d44-185e-4645-9078-4deb8da00531\") " pod="openstack/glance-db-sync-cvp9t"
Oct 07 12:45:30 crc kubenswrapper[5024]: I1007 12:45:30.038198 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36b57d44-185e-4645-9078-4deb8da00531-combined-ca-bundle\") pod \"glance-db-sync-cvp9t\" (UID: \"36b57d44-185e-4645-9078-4deb8da00531\") " pod="openstack/glance-db-sync-cvp9t"
Oct 07 12:45:30 crc kubenswrapper[5024]: I1007 12:45:30.039846 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbbh6\" (UniqueName: \"kubernetes.io/projected/36b57d44-185e-4645-9078-4deb8da00531-kube-api-access-rbbh6\") pod \"glance-db-sync-cvp9t\" (UID: \"36b57d44-185e-4645-9078-4deb8da00531\") " pod="openstack/glance-db-sync-cvp9t"
Oct 07 12:45:30 crc kubenswrapper[5024]: I1007 12:45:30.152367 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-cvp9t"
Oct 07 12:45:30 crc kubenswrapper[5024]: I1007 12:45:30.459532 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:45:30 crc kubenswrapper[5024]: W1007 12:45:30.474505 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36b57d44_185e_4645_9078_4deb8da00531.slice/crio-eba893d9568ed95f6fb6043c720821b7cfb1b0bf03d08f1b545f0bff02f6222a WatchSource:0}: Error finding container eba893d9568ed95f6fb6043c720821b7cfb1b0bf03d08f1b545f0bff02f6222a: Status 404 returned error can't find the container with id eba893d9568ed95f6fb6043c720821b7cfb1b0bf03d08f1b545f0bff02f6222a
Oct 07 12:45:30 crc kubenswrapper[5024]: I1007 12:45:30.477880 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-cvp9t"]
Oct 07 12:45:30 crc kubenswrapper[5024]: I1007 12:45:30.738104 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cvp9t" event={"ID":"36b57d44-185e-4645-9078-4deb8da00531","Type":"ContainerStarted","Data":"eba893d9568ed95f6fb6043c720821b7cfb1b0bf03d08f1b545f0bff02f6222a"}
Oct 07 12:45:30 crc kubenswrapper[5024]: I1007 12:45:30.755697 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="d2d61d4d-4921-4832-bb53-3ca3a70663cf" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused"
Oct 07 12:45:30 crc kubenswrapper[5024]: I1007 12:45:30.760731 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72e3c15c-6bac-41e4-87c4-13fa2e9836f7" path="/var/lib/kubelet/pods/72e3c15c-6bac-41e4-87c4-13fa2e9836f7/volumes"
Oct 07 12:45:40 crc kubenswrapper[5024]: I1007 12:45:40.780529 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.081586 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-bpd9r"]
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.082635 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-bpd9r"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.094487 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-bpd9r"]
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.178591 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-jt8lj"]
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.179923 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jt8lj"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.195832 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jt8lj"]
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.217346 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68lfb\" (UniqueName: \"kubernetes.io/projected/318e6758-8658-4a44-a89e-c663cb02d9f8-kube-api-access-68lfb\") pod \"cinder-db-create-bpd9r\" (UID: \"318e6758-8658-4a44-a89e-c663cb02d9f8\") " pod="openstack/cinder-db-create-bpd9r"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.319274 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktrjt\" (UniqueName: \"kubernetes.io/projected/2e7d4a23-1940-46f1-9555-dc1eb0154137-kube-api-access-ktrjt\") pod \"barbican-db-create-jt8lj\" (UID: \"2e7d4a23-1940-46f1-9555-dc1eb0154137\") " pod="openstack/barbican-db-create-jt8lj"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.319374 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68lfb\" (UniqueName: \"kubernetes.io/projected/318e6758-8658-4a44-a89e-c663cb02d9f8-kube-api-access-68lfb\") pod \"cinder-db-create-bpd9r\" (UID: \"318e6758-8658-4a44-a89e-c663cb02d9f8\") " pod="openstack/cinder-db-create-bpd9r"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.337374 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68lfb\" (UniqueName: \"kubernetes.io/projected/318e6758-8658-4a44-a89e-c663cb02d9f8-kube-api-access-68lfb\") pod \"cinder-db-create-bpd9r\" (UID: \"318e6758-8658-4a44-a89e-c663cb02d9f8\") " pod="openstack/cinder-db-create-bpd9r"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.393489 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-qq5t7"]
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.394685 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qq5t7"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.401838 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-bpd9r"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.407186 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-qq5t7"]
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.420454 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktrjt\" (UniqueName: \"kubernetes.io/projected/2e7d4a23-1940-46f1-9555-dc1eb0154137-kube-api-access-ktrjt\") pod \"barbican-db-create-jt8lj\" (UID: \"2e7d4a23-1940-46f1-9555-dc1eb0154137\") " pod="openstack/barbican-db-create-jt8lj"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.464941 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktrjt\" (UniqueName: \"kubernetes.io/projected/2e7d4a23-1940-46f1-9555-dc1eb0154137-kube-api-access-ktrjt\") pod \"barbican-db-create-jt8lj\" (UID: \"2e7d4a23-1940-46f1-9555-dc1eb0154137\") " pod="openstack/barbican-db-create-jt8lj"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.497534 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jt8lj"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.522335 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6p7q\" (UniqueName: \"kubernetes.io/projected/44ac3513-8a23-45c3-a80e-de4304c2f967-kube-api-access-z6p7q\") pod \"neutron-db-create-qq5t7\" (UID: \"44ac3513-8a23-45c3-a80e-de4304c2f967\") " pod="openstack/neutron-db-create-qq5t7"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.568106 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-csbb9"]
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.569514 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-csbb9"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.572612 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mtrr5"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.574689 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.574900 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.578092 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.591995 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-csbb9"]
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.624190 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6p7q\" (UniqueName: \"kubernetes.io/projected/44ac3513-8a23-45c3-a80e-de4304c2f967-kube-api-access-z6p7q\") pod \"neutron-db-create-qq5t7\" (UID: \"44ac3513-8a23-45c3-a80e-de4304c2f967\") " pod="openstack/neutron-db-create-qq5t7"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.640618 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6p7q\" (UniqueName: \"kubernetes.io/projected/44ac3513-8a23-45c3-a80e-de4304c2f967-kube-api-access-z6p7q\") pod \"neutron-db-create-qq5t7\" (UID: \"44ac3513-8a23-45c3-a80e-de4304c2f967\") " pod="openstack/neutron-db-create-qq5t7"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.718013 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qq5t7"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.725308 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f2b23db-8959-4c57-bb68-0823d7c75a17-combined-ca-bundle\") pod \"keystone-db-sync-csbb9\" (UID: \"0f2b23db-8959-4c57-bb68-0823d7c75a17\") " pod="openstack/keystone-db-sync-csbb9"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.725506 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f2b23db-8959-4c57-bb68-0823d7c75a17-config-data\") pod \"keystone-db-sync-csbb9\" (UID: \"0f2b23db-8959-4c57-bb68-0823d7c75a17\") " pod="openstack/keystone-db-sync-csbb9"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.725670 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkfzj\" (UniqueName: \"kubernetes.io/projected/0f2b23db-8959-4c57-bb68-0823d7c75a17-kube-api-access-vkfzj\") pod \"keystone-db-sync-csbb9\" (UID: \"0f2b23db-8959-4c57-bb68-0823d7c75a17\") " pod="openstack/keystone-db-sync-csbb9"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.828345 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f2b23db-8959-4c57-bb68-0823d7c75a17-combined-ca-bundle\") pod \"keystone-db-sync-csbb9\" (UID: \"0f2b23db-8959-4c57-bb68-0823d7c75a17\") " pod="openstack/keystone-db-sync-csbb9"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.828403 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f2b23db-8959-4c57-bb68-0823d7c75a17-config-data\") pod \"keystone-db-sync-csbb9\" (UID: \"0f2b23db-8959-4c57-bb68-0823d7c75a17\") " pod="openstack/keystone-db-sync-csbb9"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.828481 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkfzj\" (UniqueName: \"kubernetes.io/projected/0f2b23db-8959-4c57-bb68-0823d7c75a17-kube-api-access-vkfzj\") pod \"keystone-db-sync-csbb9\" (UID: \"0f2b23db-8959-4c57-bb68-0823d7c75a17\") " pod="openstack/keystone-db-sync-csbb9"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.832628 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f2b23db-8959-4c57-bb68-0823d7c75a17-combined-ca-bundle\") pod \"keystone-db-sync-csbb9\" (UID: \"0f2b23db-8959-4c57-bb68-0823d7c75a17\") " pod="openstack/keystone-db-sync-csbb9"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.833410 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f2b23db-8959-4c57-bb68-0823d7c75a17-config-data\") pod \"keystone-db-sync-csbb9\" (UID: \"0f2b23db-8959-4c57-bb68-0823d7c75a17\") " pod="openstack/keystone-db-sync-csbb9"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.845882 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkfzj\" (UniqueName: \"kubernetes.io/projected/0f2b23db-8959-4c57-bb68-0823d7c75a17-kube-api-access-vkfzj\") pod \"keystone-db-sync-csbb9\" (UID: \"0f2b23db-8959-4c57-bb68-0823d7c75a17\") " pod="openstack/keystone-db-sync-csbb9"
Oct 07 12:45:41 crc kubenswrapper[5024]: I1007 12:45:41.895933 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-csbb9"
Oct 07 12:45:43 crc kubenswrapper[5024]: I1007 12:45:43.679992 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-csbb9"]
Oct 07 12:45:43 crc kubenswrapper[5024]: I1007 12:45:43.686213 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-bpd9r"]
Oct 07 12:45:43 crc kubenswrapper[5024]: I1007 12:45:43.692844 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-qq5t7"]
Oct 07 12:45:43 crc kubenswrapper[5024]: I1007 12:45:43.812703 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jt8lj"]
Oct 07 12:45:43 crc kubenswrapper[5024]: W1007 12:45:43.813966 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e7d4a23_1940_46f1_9555_dc1eb0154137.slice/crio-ae9b06aca51f3bafeab8a39fc5577de0bb1cccb66fa99db311a6ef7f721eefbb WatchSource:0}: Error finding container ae9b06aca51f3bafeab8a39fc5577de0bb1cccb66fa99db311a6ef7f721eefbb: Status 404 returned error can't find the container with id ae9b06aca51f3bafeab8a39fc5577de0bb1cccb66fa99db311a6ef7f721eefbb
Oct 07 12:45:43 crc kubenswrapper[5024]: I1007 12:45:43.881987 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-csbb9" event={"ID":"0f2b23db-8959-4c57-bb68-0823d7c75a17","Type":"ContainerStarted","Data":"7d131ac4583a1fdfae126dd76a7e04b94a6be8b9f6c2b5ee7383d31ebaab81c6"}
Oct 07 12:45:43 crc kubenswrapper[5024]: I1007 12:45:43.883630 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qq5t7" event={"ID":"44ac3513-8a23-45c3-a80e-de4304c2f967","Type":"ContainerStarted","Data":"65d4b877e404b0da0eff905fab8a5b5b9eba46915d77b6cef37f149acfa12236"}
Oct 07 12:45:43 crc kubenswrapper[5024]: I1007 12:45:43.884983 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jt8lj" event={"ID":"2e7d4a23-1940-46f1-9555-dc1eb0154137","Type":"ContainerStarted","Data":"ae9b06aca51f3bafeab8a39fc5577de0bb1cccb66fa99db311a6ef7f721eefbb"}
Oct 07 12:45:43 crc kubenswrapper[5024]: I1007 12:45:43.885963 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-bpd9r" event={"ID":"318e6758-8658-4a44-a89e-c663cb02d9f8","Type":"ContainerStarted","Data":"d854f9136f9cd4dbb70ecb524746658fc5e2fd0b2e71ee6c58501e602c8dffb5"}
Oct 07 12:45:44 crc kubenswrapper[5024]: I1007 12:45:44.900487 5024 generic.go:334] "Generic (PLEG): container finished" podID="2e7d4a23-1940-46f1-9555-dc1eb0154137" containerID="6e10551055ee44a3d93397257dbecad0300326da14a1064b1369159915e9f866" exitCode=0
Oct 07 12:45:44 crc kubenswrapper[5024]: I1007 12:45:44.900619 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jt8lj" event={"ID":"2e7d4a23-1940-46f1-9555-dc1eb0154137","Type":"ContainerDied","Data":"6e10551055ee44a3d93397257dbecad0300326da14a1064b1369159915e9f866"}
Oct 07 12:45:44 crc kubenswrapper[5024]: I1007 12:45:44.907562 5024 generic.go:334] "Generic (PLEG): container finished" podID="318e6758-8658-4a44-a89e-c663cb02d9f8" containerID="1b5f3f879ee298b0e215916b1924fedfb6651c8ff1da9920346699e5acd0defc" exitCode=0
Oct 07 12:45:44 crc kubenswrapper[5024]: I1007 12:45:44.907652 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-bpd9r" event={"ID":"318e6758-8658-4a44-a89e-c663cb02d9f8","Type":"ContainerDied","Data":"1b5f3f879ee298b0e215916b1924fedfb6651c8ff1da9920346699e5acd0defc"}
Oct 07 12:45:44 crc kubenswrapper[5024]: I1007 12:45:44.912189 5024 generic.go:334] "Generic (PLEG): container finished" podID="44ac3513-8a23-45c3-a80e-de4304c2f967" containerID="00554f87e0cc42c2324f8557cca0b3338cfe6514266dac6c4e6588726c7fc4d5" exitCode=0
Oct 07 12:45:44 crc kubenswrapper[5024]: I1007 12:45:44.912262 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qq5t7" event={"ID":"44ac3513-8a23-45c3-a80e-de4304c2f967","Type":"ContainerDied","Data":"00554f87e0cc42c2324f8557cca0b3338cfe6514266dac6c4e6588726c7fc4d5"}
Oct 07 12:45:44 crc kubenswrapper[5024]: I1007 12:45:44.915384 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cvp9t" event={"ID":"36b57d44-185e-4645-9078-4deb8da00531","Type":"ContainerStarted","Data":"c8bb8b6c3282f6fb85e0525e2327bf96b3267d635d8e8cf466c9ce6e2a5e7ba4"}
Oct 07 12:45:44 crc kubenswrapper[5024]: I1007 12:45:44.977883 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-cvp9t" podStartSLOduration=2.948842536 podStartE2EDuration="15.977866356s" podCreationTimestamp="2025-10-07 12:45:29 +0000 UTC" firstStartedPulling="2025-10-07 12:45:30.476440061 +0000 UTC m=+1068.552226899" lastFinishedPulling="2025-10-07 12:45:43.505463881 +0000 UTC m=+1081.581250719" observedRunningTime="2025-10-07 12:45:44.970084908 +0000 UTC m=+1083.045871746" watchObservedRunningTime="2025-10-07 12:45:44.977866356 +0000 UTC m=+1083.053653194"
Oct 07 12:45:46 crc kubenswrapper[5024]: I1007 12:45:46.284807 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jt8lj"
Oct 07 12:45:46 crc kubenswrapper[5024]: I1007 12:45:46.408906 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-bpd9r"
Oct 07 12:45:46 crc kubenswrapper[5024]: I1007 12:45:46.416726 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qq5t7"
Oct 07 12:45:46 crc kubenswrapper[5024]: I1007 12:45:46.426105 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktrjt\" (UniqueName: \"kubernetes.io/projected/2e7d4a23-1940-46f1-9555-dc1eb0154137-kube-api-access-ktrjt\") pod \"2e7d4a23-1940-46f1-9555-dc1eb0154137\" (UID: \"2e7d4a23-1940-46f1-9555-dc1eb0154137\") "
Oct 07 12:45:46 crc kubenswrapper[5024]: I1007 12:45:46.433694 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e7d4a23-1940-46f1-9555-dc1eb0154137-kube-api-access-ktrjt" (OuterVolumeSpecName: "kube-api-access-ktrjt") pod "2e7d4a23-1940-46f1-9555-dc1eb0154137" (UID: "2e7d4a23-1940-46f1-9555-dc1eb0154137"). InnerVolumeSpecName "kube-api-access-ktrjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 12:45:46 crc kubenswrapper[5024]: I1007 12:45:46.527257 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6p7q\" (UniqueName: \"kubernetes.io/projected/44ac3513-8a23-45c3-a80e-de4304c2f967-kube-api-access-z6p7q\") pod \"44ac3513-8a23-45c3-a80e-de4304c2f967\" (UID: \"44ac3513-8a23-45c3-a80e-de4304c2f967\") "
Oct 07 12:45:46 crc kubenswrapper[5024]: I1007 12:45:46.527440 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68lfb\" (UniqueName: \"kubernetes.io/projected/318e6758-8658-4a44-a89e-c663cb02d9f8-kube-api-access-68lfb\") pod \"318e6758-8658-4a44-a89e-c663cb02d9f8\" (UID: \"318e6758-8658-4a44-a89e-c663cb02d9f8\") "
Oct 07 12:45:46 crc kubenswrapper[5024]: I1007 12:45:46.527782 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktrjt\" (UniqueName: \"kubernetes.io/projected/2e7d4a23-1940-46f1-9555-dc1eb0154137-kube-api-access-ktrjt\") on node \"crc\" DevicePath \"\""
Oct 07 12:45:46 crc kubenswrapper[5024]: I1007 12:45:46.530391 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/318e6758-8658-4a44-a89e-c663cb02d9f8-kube-api-access-68lfb" (OuterVolumeSpecName: "kube-api-access-68lfb") pod "318e6758-8658-4a44-a89e-c663cb02d9f8" (UID: "318e6758-8658-4a44-a89e-c663cb02d9f8"). InnerVolumeSpecName "kube-api-access-68lfb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 12:45:46 crc kubenswrapper[5024]: I1007 12:45:46.530518 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44ac3513-8a23-45c3-a80e-de4304c2f967-kube-api-access-z6p7q" (OuterVolumeSpecName: "kube-api-access-z6p7q") pod "44ac3513-8a23-45c3-a80e-de4304c2f967" (UID: "44ac3513-8a23-45c3-a80e-de4304c2f967"). InnerVolumeSpecName "kube-api-access-z6p7q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 12:45:46 crc kubenswrapper[5024]: I1007 12:45:46.629526 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68lfb\" (UniqueName: \"kubernetes.io/projected/318e6758-8658-4a44-a89e-c663cb02d9f8-kube-api-access-68lfb\") on node \"crc\" DevicePath \"\""
Oct 07 12:45:46 crc kubenswrapper[5024]: I1007 12:45:46.629567 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6p7q\" (UniqueName: \"kubernetes.io/projected/44ac3513-8a23-45c3-a80e-de4304c2f967-kube-api-access-z6p7q\") on node \"crc\" DevicePath \"\""
Oct 07 12:45:46 crc kubenswrapper[5024]: I1007 12:45:46.933321 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jt8lj" event={"ID":"2e7d4a23-1940-46f1-9555-dc1eb0154137","Type":"ContainerDied","Data":"ae9b06aca51f3bafeab8a39fc5577de0bb1cccb66fa99db311a6ef7f721eefbb"}
Oct 07 12:45:46 crc kubenswrapper[5024]: I1007 12:45:46.933651 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae9b06aca51f3bafeab8a39fc5577de0bb1cccb66fa99db311a6ef7f721eefbb"
Oct 07 12:45:46 crc kubenswrapper[5024]: I1007
12:45:46.933357 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jt8lj" Oct 07 12:45:46 crc kubenswrapper[5024]: I1007 12:45:46.936180 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-bpd9r" event={"ID":"318e6758-8658-4a44-a89e-c663cb02d9f8","Type":"ContainerDied","Data":"d854f9136f9cd4dbb70ecb524746658fc5e2fd0b2e71ee6c58501e602c8dffb5"} Oct 07 12:45:46 crc kubenswrapper[5024]: I1007 12:45:46.936344 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d854f9136f9cd4dbb70ecb524746658fc5e2fd0b2e71ee6c58501e602c8dffb5" Oct 07 12:45:46 crc kubenswrapper[5024]: I1007 12:45:46.936358 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-bpd9r" Oct 07 12:45:46 crc kubenswrapper[5024]: I1007 12:45:46.937947 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qq5t7" event={"ID":"44ac3513-8a23-45c3-a80e-de4304c2f967","Type":"ContainerDied","Data":"65d4b877e404b0da0eff905fab8a5b5b9eba46915d77b6cef37f149acfa12236"} Oct 07 12:45:46 crc kubenswrapper[5024]: I1007 12:45:46.938060 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65d4b877e404b0da0eff905fab8a5b5b9eba46915d77b6cef37f149acfa12236" Oct 07 12:45:46 crc kubenswrapper[5024]: I1007 12:45:46.938001 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-qq5t7" Oct 07 12:45:49 crc kubenswrapper[5024]: I1007 12:45:49.961090 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-csbb9" event={"ID":"0f2b23db-8959-4c57-bb68-0823d7c75a17","Type":"ContainerStarted","Data":"cd977951a2da68440677fe4ce3ddec1236709987b09d6ccd5b9d08f1f7007fca"} Oct 07 12:45:49 crc kubenswrapper[5024]: I1007 12:45:49.980847 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-csbb9" podStartSLOduration=3.44709775 podStartE2EDuration="8.980829513s" podCreationTimestamp="2025-10-07 12:45:41 +0000 UTC" firstStartedPulling="2025-10-07 12:45:43.686723438 +0000 UTC m=+1081.762510266" lastFinishedPulling="2025-10-07 12:45:49.220455171 +0000 UTC m=+1087.296242029" observedRunningTime="2025-10-07 12:45:49.977877796 +0000 UTC m=+1088.053664634" watchObservedRunningTime="2025-10-07 12:45:49.980829513 +0000 UTC m=+1088.056616351" Oct 07 12:45:52 crc kubenswrapper[5024]: I1007 12:45:52.989605 5024 generic.go:334] "Generic (PLEG): container finished" podID="0f2b23db-8959-4c57-bb68-0823d7c75a17" containerID="cd977951a2da68440677fe4ce3ddec1236709987b09d6ccd5b9d08f1f7007fca" exitCode=0 Oct 07 12:45:52 crc kubenswrapper[5024]: I1007 12:45:52.989650 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-csbb9" event={"ID":"0f2b23db-8959-4c57-bb68-0823d7c75a17","Type":"ContainerDied","Data":"cd977951a2da68440677fe4ce3ddec1236709987b09d6ccd5b9d08f1f7007fca"} Oct 07 12:45:54 crc kubenswrapper[5024]: I1007 12:45:54.351793 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-csbb9" Oct 07 12:45:54 crc kubenswrapper[5024]: I1007 12:45:54.384091 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f2b23db-8959-4c57-bb68-0823d7c75a17-config-data\") pod \"0f2b23db-8959-4c57-bb68-0823d7c75a17\" (UID: \"0f2b23db-8959-4c57-bb68-0823d7c75a17\") " Oct 07 12:45:54 crc kubenswrapper[5024]: I1007 12:45:54.384889 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f2b23db-8959-4c57-bb68-0823d7c75a17-combined-ca-bundle\") pod \"0f2b23db-8959-4c57-bb68-0823d7c75a17\" (UID: \"0f2b23db-8959-4c57-bb68-0823d7c75a17\") " Oct 07 12:45:54 crc kubenswrapper[5024]: I1007 12:45:54.384973 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkfzj\" (UniqueName: \"kubernetes.io/projected/0f2b23db-8959-4c57-bb68-0823d7c75a17-kube-api-access-vkfzj\") pod \"0f2b23db-8959-4c57-bb68-0823d7c75a17\" (UID: \"0f2b23db-8959-4c57-bb68-0823d7c75a17\") " Oct 07 12:45:54 crc kubenswrapper[5024]: I1007 12:45:54.394107 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f2b23db-8959-4c57-bb68-0823d7c75a17-kube-api-access-vkfzj" (OuterVolumeSpecName: "kube-api-access-vkfzj") pod "0f2b23db-8959-4c57-bb68-0823d7c75a17" (UID: "0f2b23db-8959-4c57-bb68-0823d7c75a17"). InnerVolumeSpecName "kube-api-access-vkfzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:45:54 crc kubenswrapper[5024]: I1007 12:45:54.420912 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f2b23db-8959-4c57-bb68-0823d7c75a17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f2b23db-8959-4c57-bb68-0823d7c75a17" (UID: "0f2b23db-8959-4c57-bb68-0823d7c75a17"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:45:54 crc kubenswrapper[5024]: I1007 12:45:54.428760 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f2b23db-8959-4c57-bb68-0823d7c75a17-config-data" (OuterVolumeSpecName: "config-data") pod "0f2b23db-8959-4c57-bb68-0823d7c75a17" (UID: "0f2b23db-8959-4c57-bb68-0823d7c75a17"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:45:54 crc kubenswrapper[5024]: I1007 12:45:54.487108 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f2b23db-8959-4c57-bb68-0823d7c75a17-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:45:54 crc kubenswrapper[5024]: I1007 12:45:54.487177 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkfzj\" (UniqueName: \"kubernetes.io/projected/0f2b23db-8959-4c57-bb68-0823d7c75a17-kube-api-access-vkfzj\") on node \"crc\" DevicePath \"\"" Oct 07 12:45:54 crc kubenswrapper[5024]: I1007 12:45:54.487198 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f2b23db-8959-4c57-bb68-0823d7c75a17-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.006658 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-csbb9" event={"ID":"0f2b23db-8959-4c57-bb68-0823d7c75a17","Type":"ContainerDied","Data":"7d131ac4583a1fdfae126dd76a7e04b94a6be8b9f6c2b5ee7383d31ebaab81c6"} Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.006714 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d131ac4583a1fdfae126dd76a7e04b94a6be8b9f6c2b5ee7383d31ebaab81c6" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.006733 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-csbb9" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.266554 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-nw9lx"] Oct 07 12:45:55 crc kubenswrapper[5024]: E1007 12:45:55.266875 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e7d4a23-1940-46f1-9555-dc1eb0154137" containerName="mariadb-database-create" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.266895 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e7d4a23-1940-46f1-9555-dc1eb0154137" containerName="mariadb-database-create" Oct 07 12:45:55 crc kubenswrapper[5024]: E1007 12:45:55.266913 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f2b23db-8959-4c57-bb68-0823d7c75a17" containerName="keystone-db-sync" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.266920 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f2b23db-8959-4c57-bb68-0823d7c75a17" containerName="keystone-db-sync" Oct 07 12:45:55 crc kubenswrapper[5024]: E1007 12:45:55.266933 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="318e6758-8658-4a44-a89e-c663cb02d9f8" containerName="mariadb-database-create" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.266940 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="318e6758-8658-4a44-a89e-c663cb02d9f8" containerName="mariadb-database-create" Oct 07 12:45:55 crc kubenswrapper[5024]: E1007 12:45:55.266963 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44ac3513-8a23-45c3-a80e-de4304c2f967" containerName="mariadb-database-create" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.266969 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="44ac3513-8a23-45c3-a80e-de4304c2f967" containerName="mariadb-database-create" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.269008 5024 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="318e6758-8658-4a44-a89e-c663cb02d9f8" containerName="mariadb-database-create" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.269028 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="44ac3513-8a23-45c3-a80e-de4304c2f967" containerName="mariadb-database-create" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.269048 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e7d4a23-1940-46f1-9555-dc1eb0154137" containerName="mariadb-database-create" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.269060 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f2b23db-8959-4c57-bb68-0823d7c75a17" containerName="keystone-db-sync" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.269898 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bb4695fc-nw9lx" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.295212 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-nw9lx"] Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.316073 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hhprc"] Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.317424 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hhprc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.321006 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.321566 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mtrr5" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.321695 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.321852 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.335623 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hhprc"] Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.398156 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26c7bebc-2225-4d65-a8fc-7e3734305ba2-ovsdbserver-nb\") pod \"dnsmasq-dns-75bb4695fc-nw9lx\" (UID: \"26c7bebc-2225-4d65-a8fc-7e3734305ba2\") " pod="openstack/dnsmasq-dns-75bb4695fc-nw9lx" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.398202 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26c7bebc-2225-4d65-a8fc-7e3734305ba2-ovsdbserver-sb\") pod \"dnsmasq-dns-75bb4695fc-nw9lx\" (UID: \"26c7bebc-2225-4d65-a8fc-7e3734305ba2\") " pod="openstack/dnsmasq-dns-75bb4695fc-nw9lx" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.398357 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdklw\" (UniqueName: \"kubernetes.io/projected/26c7bebc-2225-4d65-a8fc-7e3734305ba2-kube-api-access-jdklw\") pod 
\"dnsmasq-dns-75bb4695fc-nw9lx\" (UID: \"26c7bebc-2225-4d65-a8fc-7e3734305ba2\") " pod="openstack/dnsmasq-dns-75bb4695fc-nw9lx" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.398382 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26c7bebc-2225-4d65-a8fc-7e3734305ba2-dns-svc\") pod \"dnsmasq-dns-75bb4695fc-nw9lx\" (UID: \"26c7bebc-2225-4d65-a8fc-7e3734305ba2\") " pod="openstack/dnsmasq-dns-75bb4695fc-nw9lx" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.398510 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26c7bebc-2225-4d65-a8fc-7e3734305ba2-config\") pod \"dnsmasq-dns-75bb4695fc-nw9lx\" (UID: \"26c7bebc-2225-4d65-a8fc-7e3734305ba2\") " pod="openstack/dnsmasq-dns-75bb4695fc-nw9lx" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.450782 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.453026 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.457960 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.458165 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.506456 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26c7bebc-2225-4d65-a8fc-7e3734305ba2-ovsdbserver-nb\") pod \"dnsmasq-dns-75bb4695fc-nw9lx\" (UID: \"26c7bebc-2225-4d65-a8fc-7e3734305ba2\") " pod="openstack/dnsmasq-dns-75bb4695fc-nw9lx" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.506789 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26c7bebc-2225-4d65-a8fc-7e3734305ba2-ovsdbserver-sb\") pod \"dnsmasq-dns-75bb4695fc-nw9lx\" (UID: \"26c7bebc-2225-4d65-a8fc-7e3734305ba2\") " pod="openstack/dnsmasq-dns-75bb4695fc-nw9lx" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.506865 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww6lh\" (UniqueName: \"kubernetes.io/projected/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-kube-api-access-ww6lh\") pod \"keystone-bootstrap-hhprc\" (UID: \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\") " pod="openstack/keystone-bootstrap-hhprc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.506890 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-config-data\") pod \"keystone-bootstrap-hhprc\" (UID: \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\") " pod="openstack/keystone-bootstrap-hhprc" Oct 07 12:45:55 crc 
kubenswrapper[5024]: I1007 12:45:55.506912 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdklw\" (UniqueName: \"kubernetes.io/projected/26c7bebc-2225-4d65-a8fc-7e3734305ba2-kube-api-access-jdklw\") pod \"dnsmasq-dns-75bb4695fc-nw9lx\" (UID: \"26c7bebc-2225-4d65-a8fc-7e3734305ba2\") " pod="openstack/dnsmasq-dns-75bb4695fc-nw9lx" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.507042 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-credential-keys\") pod \"keystone-bootstrap-hhprc\" (UID: \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\") " pod="openstack/keystone-bootstrap-hhprc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.507120 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26c7bebc-2225-4d65-a8fc-7e3734305ba2-dns-svc\") pod \"dnsmasq-dns-75bb4695fc-nw9lx\" (UID: \"26c7bebc-2225-4d65-a8fc-7e3734305ba2\") " pod="openstack/dnsmasq-dns-75bb4695fc-nw9lx" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.507225 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26c7bebc-2225-4d65-a8fc-7e3734305ba2-config\") pod \"dnsmasq-dns-75bb4695fc-nw9lx\" (UID: \"26c7bebc-2225-4d65-a8fc-7e3734305ba2\") " pod="openstack/dnsmasq-dns-75bb4695fc-nw9lx" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.507406 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-combined-ca-bundle\") pod \"keystone-bootstrap-hhprc\" (UID: \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\") " pod="openstack/keystone-bootstrap-hhprc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.507438 
5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-scripts\") pod \"keystone-bootstrap-hhprc\" (UID: \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\") " pod="openstack/keystone-bootstrap-hhprc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.507515 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-fernet-keys\") pod \"keystone-bootstrap-hhprc\" (UID: \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\") " pod="openstack/keystone-bootstrap-hhprc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.507763 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26c7bebc-2225-4d65-a8fc-7e3734305ba2-ovsdbserver-sb\") pod \"dnsmasq-dns-75bb4695fc-nw9lx\" (UID: \"26c7bebc-2225-4d65-a8fc-7e3734305ba2\") " pod="openstack/dnsmasq-dns-75bb4695fc-nw9lx" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.508470 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26c7bebc-2225-4d65-a8fc-7e3734305ba2-ovsdbserver-nb\") pod \"dnsmasq-dns-75bb4695fc-nw9lx\" (UID: \"26c7bebc-2225-4d65-a8fc-7e3734305ba2\") " pod="openstack/dnsmasq-dns-75bb4695fc-nw9lx" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.508733 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26c7bebc-2225-4d65-a8fc-7e3734305ba2-config\") pod \"dnsmasq-dns-75bb4695fc-nw9lx\" (UID: \"26c7bebc-2225-4d65-a8fc-7e3734305ba2\") " pod="openstack/dnsmasq-dns-75bb4695fc-nw9lx" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.509564 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/26c7bebc-2225-4d65-a8fc-7e3734305ba2-dns-svc\") pod \"dnsmasq-dns-75bb4695fc-nw9lx\" (UID: \"26c7bebc-2225-4d65-a8fc-7e3734305ba2\") " pod="openstack/dnsmasq-dns-75bb4695fc-nw9lx" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.509633 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.563930 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdklw\" (UniqueName: \"kubernetes.io/projected/26c7bebc-2225-4d65-a8fc-7e3734305ba2-kube-api-access-jdklw\") pod \"dnsmasq-dns-75bb4695fc-nw9lx\" (UID: \"26c7bebc-2225-4d65-a8fc-7e3734305ba2\") " pod="openstack/dnsmasq-dns-75bb4695fc-nw9lx" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.600945 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bb4695fc-nw9lx" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.619299 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-credential-keys\") pod \"keystone-bootstrap-hhprc\" (UID: \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\") " pod="openstack/keystone-bootstrap-hhprc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.619358 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg7jg\" (UniqueName: \"kubernetes.io/projected/230742cc-7316-4b6b-8331-5a0352b4ebcb-kube-api-access-gg7jg\") pod \"ceilometer-0\" (UID: \"230742cc-7316-4b6b-8331-5a0352b4ebcb\") " pod="openstack/ceilometer-0" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.619379 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/230742cc-7316-4b6b-8331-5a0352b4ebcb-scripts\") pod \"ceilometer-0\" (UID: 
\"230742cc-7316-4b6b-8331-5a0352b4ebcb\") " pod="openstack/ceilometer-0" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.619419 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/230742cc-7316-4b6b-8331-5a0352b4ebcb-log-httpd\") pod \"ceilometer-0\" (UID: \"230742cc-7316-4b6b-8331-5a0352b4ebcb\") " pod="openstack/ceilometer-0" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.619437 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-combined-ca-bundle\") pod \"keystone-bootstrap-hhprc\" (UID: \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\") " pod="openstack/keystone-bootstrap-hhprc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.619458 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-scripts\") pod \"keystone-bootstrap-hhprc\" (UID: \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\") " pod="openstack/keystone-bootstrap-hhprc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.619498 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/230742cc-7316-4b6b-8331-5a0352b4ebcb-config-data\") pod \"ceilometer-0\" (UID: \"230742cc-7316-4b6b-8331-5a0352b4ebcb\") " pod="openstack/ceilometer-0" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.619512 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/230742cc-7316-4b6b-8331-5a0352b4ebcb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"230742cc-7316-4b6b-8331-5a0352b4ebcb\") " pod="openstack/ceilometer-0" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.619528 
5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-fernet-keys\") pod \"keystone-bootstrap-hhprc\" (UID: \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\") " pod="openstack/keystone-bootstrap-hhprc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.619554 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/230742cc-7316-4b6b-8331-5a0352b4ebcb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"230742cc-7316-4b6b-8331-5a0352b4ebcb\") " pod="openstack/ceilometer-0" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.620344 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/230742cc-7316-4b6b-8331-5a0352b4ebcb-run-httpd\") pod \"ceilometer-0\" (UID: \"230742cc-7316-4b6b-8331-5a0352b4ebcb\") " pod="openstack/ceilometer-0" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.620421 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww6lh\" (UniqueName: \"kubernetes.io/projected/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-kube-api-access-ww6lh\") pod \"keystone-bootstrap-hhprc\" (UID: \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\") " pod="openstack/keystone-bootstrap-hhprc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.620452 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-config-data\") pod \"keystone-bootstrap-hhprc\" (UID: \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\") " pod="openstack/keystone-bootstrap-hhprc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.623762 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-config-data\") pod \"keystone-bootstrap-hhprc\" (UID: \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\") " pod="openstack/keystone-bootstrap-hhprc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.625556 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-8vb4z"] Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.626935 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8vb4z" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.627285 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-credential-keys\") pod \"keystone-bootstrap-hhprc\" (UID: \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\") " pod="openstack/keystone-bootstrap-hhprc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.631867 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-szxzk" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.632241 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.632429 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.633165 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-scripts\") pod \"keystone-bootstrap-hhprc\" (UID: \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\") " pod="openstack/keystone-bootstrap-hhprc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.638904 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-fernet-keys\") pod \"keystone-bootstrap-hhprc\" (UID: \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\") " pod="openstack/keystone-bootstrap-hhprc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.646850 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-combined-ca-bundle\") pod \"keystone-bootstrap-hhprc\" (UID: \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\") " pod="openstack/keystone-bootstrap-hhprc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.649992 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-nw9lx"] Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.665463 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww6lh\" (UniqueName: \"kubernetes.io/projected/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-kube-api-access-ww6lh\") pod \"keystone-bootstrap-hhprc\" (UID: \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\") " pod="openstack/keystone-bootstrap-hhprc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.681301 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8vb4z"] Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.717823 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-c77tc"] Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.719109 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-745b9ddc8c-c77tc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.721826 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-c77tc"] Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.721885 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12a03ea3-aba5-4ae0-b494-6c96cc221d03-ovsdbserver-nb\") pod \"dnsmasq-dns-745b9ddc8c-c77tc\" (UID: \"12a03ea3-aba5-4ae0-b494-6c96cc221d03\") " pod="openstack/dnsmasq-dns-745b9ddc8c-c77tc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.721912 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12a03ea3-aba5-4ae0-b494-6c96cc221d03-dns-svc\") pod \"dnsmasq-dns-745b9ddc8c-c77tc\" (UID: \"12a03ea3-aba5-4ae0-b494-6c96cc221d03\") " pod="openstack/dnsmasq-dns-745b9ddc8c-c77tc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.721947 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/230742cc-7316-4b6b-8331-5a0352b4ebcb-run-httpd\") pod \"ceilometer-0\" (UID: \"230742cc-7316-4b6b-8331-5a0352b4ebcb\") " pod="openstack/ceilometer-0" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.722029 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hgqf\" (UniqueName: \"kubernetes.io/projected/dddaa23e-2e38-4835-a311-69a6e7ef3c16-kube-api-access-5hgqf\") pod \"placement-db-sync-8vb4z\" (UID: \"dddaa23e-2e38-4835-a311-69a6e7ef3c16\") " pod="openstack/placement-db-sync-8vb4z" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.722175 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/dddaa23e-2e38-4835-a311-69a6e7ef3c16-config-data\") pod \"placement-db-sync-8vb4z\" (UID: \"dddaa23e-2e38-4835-a311-69a6e7ef3c16\") " pod="openstack/placement-db-sync-8vb4z" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.722224 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg7jg\" (UniqueName: \"kubernetes.io/projected/230742cc-7316-4b6b-8331-5a0352b4ebcb-kube-api-access-gg7jg\") pod \"ceilometer-0\" (UID: \"230742cc-7316-4b6b-8331-5a0352b4ebcb\") " pod="openstack/ceilometer-0" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.722253 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/230742cc-7316-4b6b-8331-5a0352b4ebcb-scripts\") pod \"ceilometer-0\" (UID: \"230742cc-7316-4b6b-8331-5a0352b4ebcb\") " pod="openstack/ceilometer-0" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.722278 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dddaa23e-2e38-4835-a311-69a6e7ef3c16-combined-ca-bundle\") pod \"placement-db-sync-8vb4z\" (UID: \"dddaa23e-2e38-4835-a311-69a6e7ef3c16\") " pod="openstack/placement-db-sync-8vb4z" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.722285 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/230742cc-7316-4b6b-8331-5a0352b4ebcb-run-httpd\") pod \"ceilometer-0\" (UID: \"230742cc-7316-4b6b-8331-5a0352b4ebcb\") " pod="openstack/ceilometer-0" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.722528 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12a03ea3-aba5-4ae0-b494-6c96cc221d03-config\") pod \"dnsmasq-dns-745b9ddc8c-c77tc\" (UID: \"12a03ea3-aba5-4ae0-b494-6c96cc221d03\") " 
pod="openstack/dnsmasq-dns-745b9ddc8c-c77tc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.722580 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/230742cc-7316-4b6b-8331-5a0352b4ebcb-log-httpd\") pod \"ceilometer-0\" (UID: \"230742cc-7316-4b6b-8331-5a0352b4ebcb\") " pod="openstack/ceilometer-0" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.722609 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp4vt\" (UniqueName: \"kubernetes.io/projected/12a03ea3-aba5-4ae0-b494-6c96cc221d03-kube-api-access-qp4vt\") pod \"dnsmasq-dns-745b9ddc8c-c77tc\" (UID: \"12a03ea3-aba5-4ae0-b494-6c96cc221d03\") " pod="openstack/dnsmasq-dns-745b9ddc8c-c77tc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.722627 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dddaa23e-2e38-4835-a311-69a6e7ef3c16-logs\") pod \"placement-db-sync-8vb4z\" (UID: \"dddaa23e-2e38-4835-a311-69a6e7ef3c16\") " pod="openstack/placement-db-sync-8vb4z" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.722649 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/230742cc-7316-4b6b-8331-5a0352b4ebcb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"230742cc-7316-4b6b-8331-5a0352b4ebcb\") " pod="openstack/ceilometer-0" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.722665 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/230742cc-7316-4b6b-8331-5a0352b4ebcb-config-data\") pod \"ceilometer-0\" (UID: \"230742cc-7316-4b6b-8331-5a0352b4ebcb\") " pod="openstack/ceilometer-0" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.722680 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dddaa23e-2e38-4835-a311-69a6e7ef3c16-scripts\") pod \"placement-db-sync-8vb4z\" (UID: \"dddaa23e-2e38-4835-a311-69a6e7ef3c16\") " pod="openstack/placement-db-sync-8vb4z" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.722697 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12a03ea3-aba5-4ae0-b494-6c96cc221d03-ovsdbserver-sb\") pod \"dnsmasq-dns-745b9ddc8c-c77tc\" (UID: \"12a03ea3-aba5-4ae0-b494-6c96cc221d03\") " pod="openstack/dnsmasq-dns-745b9ddc8c-c77tc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.722719 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/230742cc-7316-4b6b-8331-5a0352b4ebcb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"230742cc-7316-4b6b-8331-5a0352b4ebcb\") " pod="openstack/ceilometer-0" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.723470 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/230742cc-7316-4b6b-8331-5a0352b4ebcb-log-httpd\") pod \"ceilometer-0\" (UID: \"230742cc-7316-4b6b-8331-5a0352b4ebcb\") " pod="openstack/ceilometer-0" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.727965 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/230742cc-7316-4b6b-8331-5a0352b4ebcb-scripts\") pod \"ceilometer-0\" (UID: \"230742cc-7316-4b6b-8331-5a0352b4ebcb\") " pod="openstack/ceilometer-0" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.729461 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/230742cc-7316-4b6b-8331-5a0352b4ebcb-config-data\") pod \"ceilometer-0\" (UID: 
\"230742cc-7316-4b6b-8331-5a0352b4ebcb\") " pod="openstack/ceilometer-0" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.739057 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/230742cc-7316-4b6b-8331-5a0352b4ebcb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"230742cc-7316-4b6b-8331-5a0352b4ebcb\") " pod="openstack/ceilometer-0" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.746227 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg7jg\" (UniqueName: \"kubernetes.io/projected/230742cc-7316-4b6b-8331-5a0352b4ebcb-kube-api-access-gg7jg\") pod \"ceilometer-0\" (UID: \"230742cc-7316-4b6b-8331-5a0352b4ebcb\") " pod="openstack/ceilometer-0" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.748359 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/230742cc-7316-4b6b-8331-5a0352b4ebcb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"230742cc-7316-4b6b-8331-5a0352b4ebcb\") " pod="openstack/ceilometer-0" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.791539 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.823494 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hgqf\" (UniqueName: \"kubernetes.io/projected/dddaa23e-2e38-4835-a311-69a6e7ef3c16-kube-api-access-5hgqf\") pod \"placement-db-sync-8vb4z\" (UID: \"dddaa23e-2e38-4835-a311-69a6e7ef3c16\") " pod="openstack/placement-db-sync-8vb4z" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.824011 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dddaa23e-2e38-4835-a311-69a6e7ef3c16-config-data\") pod \"placement-db-sync-8vb4z\" (UID: \"dddaa23e-2e38-4835-a311-69a6e7ef3c16\") " pod="openstack/placement-db-sync-8vb4z" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.824042 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dddaa23e-2e38-4835-a311-69a6e7ef3c16-combined-ca-bundle\") pod \"placement-db-sync-8vb4z\" (UID: \"dddaa23e-2e38-4835-a311-69a6e7ef3c16\") " pod="openstack/placement-db-sync-8vb4z" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.824062 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12a03ea3-aba5-4ae0-b494-6c96cc221d03-config\") pod \"dnsmasq-dns-745b9ddc8c-c77tc\" (UID: \"12a03ea3-aba5-4ae0-b494-6c96cc221d03\") " pod="openstack/dnsmasq-dns-745b9ddc8c-c77tc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.824111 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp4vt\" (UniqueName: \"kubernetes.io/projected/12a03ea3-aba5-4ae0-b494-6c96cc221d03-kube-api-access-qp4vt\") pod \"dnsmasq-dns-745b9ddc8c-c77tc\" (UID: \"12a03ea3-aba5-4ae0-b494-6c96cc221d03\") " pod="openstack/dnsmasq-dns-745b9ddc8c-c77tc" Oct 07 12:45:55 crc 
kubenswrapper[5024]: I1007 12:45:55.824127 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dddaa23e-2e38-4835-a311-69a6e7ef3c16-logs\") pod \"placement-db-sync-8vb4z\" (UID: \"dddaa23e-2e38-4835-a311-69a6e7ef3c16\") " pod="openstack/placement-db-sync-8vb4z" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.824170 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dddaa23e-2e38-4835-a311-69a6e7ef3c16-scripts\") pod \"placement-db-sync-8vb4z\" (UID: \"dddaa23e-2e38-4835-a311-69a6e7ef3c16\") " pod="openstack/placement-db-sync-8vb4z" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.824185 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12a03ea3-aba5-4ae0-b494-6c96cc221d03-ovsdbserver-sb\") pod \"dnsmasq-dns-745b9ddc8c-c77tc\" (UID: \"12a03ea3-aba5-4ae0-b494-6c96cc221d03\") " pod="openstack/dnsmasq-dns-745b9ddc8c-c77tc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.824216 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12a03ea3-aba5-4ae0-b494-6c96cc221d03-ovsdbserver-nb\") pod \"dnsmasq-dns-745b9ddc8c-c77tc\" (UID: \"12a03ea3-aba5-4ae0-b494-6c96cc221d03\") " pod="openstack/dnsmasq-dns-745b9ddc8c-c77tc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.824231 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12a03ea3-aba5-4ae0-b494-6c96cc221d03-dns-svc\") pod \"dnsmasq-dns-745b9ddc8c-c77tc\" (UID: \"12a03ea3-aba5-4ae0-b494-6c96cc221d03\") " pod="openstack/dnsmasq-dns-745b9ddc8c-c77tc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.825309 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12a03ea3-aba5-4ae0-b494-6c96cc221d03-dns-svc\") pod \"dnsmasq-dns-745b9ddc8c-c77tc\" (UID: \"12a03ea3-aba5-4ae0-b494-6c96cc221d03\") " pod="openstack/dnsmasq-dns-745b9ddc8c-c77tc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.825845 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dddaa23e-2e38-4835-a311-69a6e7ef3c16-logs\") pod \"placement-db-sync-8vb4z\" (UID: \"dddaa23e-2e38-4835-a311-69a6e7ef3c16\") " pod="openstack/placement-db-sync-8vb4z" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.827349 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12a03ea3-aba5-4ae0-b494-6c96cc221d03-config\") pod \"dnsmasq-dns-745b9ddc8c-c77tc\" (UID: \"12a03ea3-aba5-4ae0-b494-6c96cc221d03\") " pod="openstack/dnsmasq-dns-745b9ddc8c-c77tc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.833352 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12a03ea3-aba5-4ae0-b494-6c96cc221d03-ovsdbserver-sb\") pod \"dnsmasq-dns-745b9ddc8c-c77tc\" (UID: \"12a03ea3-aba5-4ae0-b494-6c96cc221d03\") " pod="openstack/dnsmasq-dns-745b9ddc8c-c77tc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.833423 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12a03ea3-aba5-4ae0-b494-6c96cc221d03-ovsdbserver-nb\") pod \"dnsmasq-dns-745b9ddc8c-c77tc\" (UID: \"12a03ea3-aba5-4ae0-b494-6c96cc221d03\") " pod="openstack/dnsmasq-dns-745b9ddc8c-c77tc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.837498 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dddaa23e-2e38-4835-a311-69a6e7ef3c16-config-data\") pod \"placement-db-sync-8vb4z\" (UID: 
\"dddaa23e-2e38-4835-a311-69a6e7ef3c16\") " pod="openstack/placement-db-sync-8vb4z" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.844021 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dddaa23e-2e38-4835-a311-69a6e7ef3c16-combined-ca-bundle\") pod \"placement-db-sync-8vb4z\" (UID: \"dddaa23e-2e38-4835-a311-69a6e7ef3c16\") " pod="openstack/placement-db-sync-8vb4z" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.847037 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dddaa23e-2e38-4835-a311-69a6e7ef3c16-scripts\") pod \"placement-db-sync-8vb4z\" (UID: \"dddaa23e-2e38-4835-a311-69a6e7ef3c16\") " pod="openstack/placement-db-sync-8vb4z" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.847888 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp4vt\" (UniqueName: \"kubernetes.io/projected/12a03ea3-aba5-4ae0-b494-6c96cc221d03-kube-api-access-qp4vt\") pod \"dnsmasq-dns-745b9ddc8c-c77tc\" (UID: \"12a03ea3-aba5-4ae0-b494-6c96cc221d03\") " pod="openstack/dnsmasq-dns-745b9ddc8c-c77tc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.853865 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hgqf\" (UniqueName: \"kubernetes.io/projected/dddaa23e-2e38-4835-a311-69a6e7ef3c16-kube-api-access-5hgqf\") pod \"placement-db-sync-8vb4z\" (UID: \"dddaa23e-2e38-4835-a311-69a6e7ef3c16\") " pod="openstack/placement-db-sync-8vb4z" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.880445 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8vb4z" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.900386 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-745b9ddc8c-c77tc" Oct 07 12:45:55 crc kubenswrapper[5024]: I1007 12:45:55.937153 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hhprc" Oct 07 12:45:56 crc kubenswrapper[5024]: I1007 12:45:56.262429 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-nw9lx"] Oct 07 12:45:56 crc kubenswrapper[5024]: I1007 12:45:56.404849 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:45:56 crc kubenswrapper[5024]: I1007 12:45:56.415755 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8vb4z"] Oct 07 12:45:56 crc kubenswrapper[5024]: W1007 12:45:56.417929 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod230742cc_7316_4b6b_8331_5a0352b4ebcb.slice/crio-69bf01fbba5cf3b5d2b81f1cae5d883e2b4f84f360add6190b3a4c159f08d8c4 WatchSource:0}: Error finding container 69bf01fbba5cf3b5d2b81f1cae5d883e2b4f84f360add6190b3a4c159f08d8c4: Status 404 returned error can't find the container with id 69bf01fbba5cf3b5d2b81f1cae5d883e2b4f84f360add6190b3a4c159f08d8c4 Oct 07 12:45:56 crc kubenswrapper[5024]: I1007 12:45:56.531506 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hhprc"] Oct 07 12:45:56 crc kubenswrapper[5024]: I1007 12:45:56.609704 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-c77tc"] Oct 07 12:45:57 crc kubenswrapper[5024]: I1007 12:45:57.030021 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"230742cc-7316-4b6b-8331-5a0352b4ebcb","Type":"ContainerStarted","Data":"69bf01fbba5cf3b5d2b81f1cae5d883e2b4f84f360add6190b3a4c159f08d8c4"} Oct 07 12:45:57 crc kubenswrapper[5024]: I1007 12:45:57.031602 5024 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/keystone-bootstrap-hhprc" event={"ID":"8be45da7-a22a-42cf-95cd-4dafa5f4b92b","Type":"ContainerStarted","Data":"619c6053083c7a7e406e8ef90fb4622b9bb6cda1a3cd16ae49b7cbca0949ba31"} Oct 07 12:45:57 crc kubenswrapper[5024]: I1007 12:45:57.033151 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8vb4z" event={"ID":"dddaa23e-2e38-4835-a311-69a6e7ef3c16","Type":"ContainerStarted","Data":"8d874a389b6018e286ed997c5254eed61336d6564c3602aa6ba38331a11bad91"} Oct 07 12:45:57 crc kubenswrapper[5024]: I1007 12:45:57.034620 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-c77tc" event={"ID":"12a03ea3-aba5-4ae0-b494-6c96cc221d03","Type":"ContainerStarted","Data":"2928e948b8f1c3950d2481695f2d0398db5ce4f87e2999a23bd767512bd23370"} Oct 07 12:45:57 crc kubenswrapper[5024]: I1007 12:45:57.036331 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bb4695fc-nw9lx" event={"ID":"26c7bebc-2225-4d65-a8fc-7e3734305ba2","Type":"ContainerStarted","Data":"ee897e5526b746b61721a2daf69d773f15e3b642ac4c65c8c83fcc40ae4e7403"} Oct 07 12:45:57 crc kubenswrapper[5024]: I1007 12:45:57.036356 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bb4695fc-nw9lx" event={"ID":"26c7bebc-2225-4d65-a8fc-7e3734305ba2","Type":"ContainerStarted","Data":"d4e0e97af777319b76f144521900a95b1e09894921b87da1db1e0044f62c99f0"} Oct 07 12:45:57 crc kubenswrapper[5024]: I1007 12:45:57.173281 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:45:58 crc kubenswrapper[5024]: I1007 12:45:58.050634 5024 generic.go:334] "Generic (PLEG): container finished" podID="12a03ea3-aba5-4ae0-b494-6c96cc221d03" containerID="643bb3bb8da669b60ac194133a40cfe326531afd37a14512629f1d6815e678c1" exitCode=0 Oct 07 12:45:58 crc kubenswrapper[5024]: I1007 12:45:58.050745 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-745b9ddc8c-c77tc" event={"ID":"12a03ea3-aba5-4ae0-b494-6c96cc221d03","Type":"ContainerDied","Data":"643bb3bb8da669b60ac194133a40cfe326531afd37a14512629f1d6815e678c1"} Oct 07 12:45:58 crc kubenswrapper[5024]: I1007 12:45:58.055149 5024 generic.go:334] "Generic (PLEG): container finished" podID="26c7bebc-2225-4d65-a8fc-7e3734305ba2" containerID="ee897e5526b746b61721a2daf69d773f15e3b642ac4c65c8c83fcc40ae4e7403" exitCode=0 Oct 07 12:45:58 crc kubenswrapper[5024]: I1007 12:45:58.055410 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bb4695fc-nw9lx" event={"ID":"26c7bebc-2225-4d65-a8fc-7e3734305ba2","Type":"ContainerDied","Data":"ee897e5526b746b61721a2daf69d773f15e3b642ac4c65c8c83fcc40ae4e7403"} Oct 07 12:45:58 crc kubenswrapper[5024]: I1007 12:45:58.080556 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hhprc" event={"ID":"8be45da7-a22a-42cf-95cd-4dafa5f4b92b","Type":"ContainerStarted","Data":"6902d138bb396eac85623e13d1fbf4cbf89103d17aa20f39618feaee6ce7711f"} Oct 07 12:45:58 crc kubenswrapper[5024]: I1007 12:45:58.136407 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hhprc" podStartSLOduration=3.136383205 podStartE2EDuration="3.136383205s" podCreationTimestamp="2025-10-07 12:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:45:58.132503041 +0000 UTC m=+1096.208289879" watchObservedRunningTime="2025-10-07 12:45:58.136383205 +0000 UTC m=+1096.212170043" Oct 07 12:45:58 crc kubenswrapper[5024]: I1007 12:45:58.444987 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75bb4695fc-nw9lx" Oct 07 12:45:58 crc kubenswrapper[5024]: I1007 12:45:58.583783 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26c7bebc-2225-4d65-a8fc-7e3734305ba2-ovsdbserver-nb\") pod \"26c7bebc-2225-4d65-a8fc-7e3734305ba2\" (UID: \"26c7bebc-2225-4d65-a8fc-7e3734305ba2\") " Oct 07 12:45:58 crc kubenswrapper[5024]: I1007 12:45:58.584198 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26c7bebc-2225-4d65-a8fc-7e3734305ba2-config\") pod \"26c7bebc-2225-4d65-a8fc-7e3734305ba2\" (UID: \"26c7bebc-2225-4d65-a8fc-7e3734305ba2\") " Oct 07 12:45:58 crc kubenswrapper[5024]: I1007 12:45:58.584433 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdklw\" (UniqueName: \"kubernetes.io/projected/26c7bebc-2225-4d65-a8fc-7e3734305ba2-kube-api-access-jdklw\") pod \"26c7bebc-2225-4d65-a8fc-7e3734305ba2\" (UID: \"26c7bebc-2225-4d65-a8fc-7e3734305ba2\") " Oct 07 12:45:58 crc kubenswrapper[5024]: I1007 12:45:58.585348 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26c7bebc-2225-4d65-a8fc-7e3734305ba2-dns-svc\") pod \"26c7bebc-2225-4d65-a8fc-7e3734305ba2\" (UID: \"26c7bebc-2225-4d65-a8fc-7e3734305ba2\") " Oct 07 12:45:58 crc kubenswrapper[5024]: I1007 12:45:58.585520 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26c7bebc-2225-4d65-a8fc-7e3734305ba2-ovsdbserver-sb\") pod \"26c7bebc-2225-4d65-a8fc-7e3734305ba2\" (UID: \"26c7bebc-2225-4d65-a8fc-7e3734305ba2\") " Oct 07 12:45:58 crc kubenswrapper[5024]: I1007 12:45:58.594507 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/26c7bebc-2225-4d65-a8fc-7e3734305ba2-kube-api-access-jdklw" (OuterVolumeSpecName: "kube-api-access-jdklw") pod "26c7bebc-2225-4d65-a8fc-7e3734305ba2" (UID: "26c7bebc-2225-4d65-a8fc-7e3734305ba2"). InnerVolumeSpecName "kube-api-access-jdklw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:45:58 crc kubenswrapper[5024]: I1007 12:45:58.608806 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26c7bebc-2225-4d65-a8fc-7e3734305ba2-config" (OuterVolumeSpecName: "config") pod "26c7bebc-2225-4d65-a8fc-7e3734305ba2" (UID: "26c7bebc-2225-4d65-a8fc-7e3734305ba2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:45:58 crc kubenswrapper[5024]: E1007 12:45:58.610301 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/26c7bebc-2225-4d65-a8fc-7e3734305ba2-ovsdbserver-nb podName:26c7bebc-2225-4d65-a8fc-7e3734305ba2 nodeName:}" failed. No retries permitted until 2025-10-07 12:45:59.110267744 +0000 UTC m=+1097.186054602 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ovsdbserver-nb" (UniqueName: "kubernetes.io/configmap/26c7bebc-2225-4d65-a8fc-7e3734305ba2-ovsdbserver-nb") pod "26c7bebc-2225-4d65-a8fc-7e3734305ba2" (UID: "26c7bebc-2225-4d65-a8fc-7e3734305ba2") : error deleting /var/lib/kubelet/pods/26c7bebc-2225-4d65-a8fc-7e3734305ba2/volume-subpaths: remove /var/lib/kubelet/pods/26c7bebc-2225-4d65-a8fc-7e3734305ba2/volume-subpaths: no such file or directory Oct 07 12:45:58 crc kubenswrapper[5024]: I1007 12:45:58.610582 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26c7bebc-2225-4d65-a8fc-7e3734305ba2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "26c7bebc-2225-4d65-a8fc-7e3734305ba2" (UID: "26c7bebc-2225-4d65-a8fc-7e3734305ba2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:45:58 crc kubenswrapper[5024]: I1007 12:45:58.610796 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26c7bebc-2225-4d65-a8fc-7e3734305ba2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "26c7bebc-2225-4d65-a8fc-7e3734305ba2" (UID: "26c7bebc-2225-4d65-a8fc-7e3734305ba2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:45:58 crc kubenswrapper[5024]: I1007 12:45:58.687075 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26c7bebc-2225-4d65-a8fc-7e3734305ba2-config\") on node \"crc\" DevicePath \"\"" Oct 07 12:45:58 crc kubenswrapper[5024]: I1007 12:45:58.687117 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdklw\" (UniqueName: \"kubernetes.io/projected/26c7bebc-2225-4d65-a8fc-7e3734305ba2-kube-api-access-jdklw\") on node \"crc\" DevicePath \"\"" Oct 07 12:45:58 crc kubenswrapper[5024]: I1007 12:45:58.687128 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26c7bebc-2225-4d65-a8fc-7e3734305ba2-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 07 12:45:58 crc kubenswrapper[5024]: I1007 12:45:58.687159 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26c7bebc-2225-4d65-a8fc-7e3734305ba2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 07 12:45:59 crc kubenswrapper[5024]: I1007 12:45:59.089160 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bb4695fc-nw9lx" event={"ID":"26c7bebc-2225-4d65-a8fc-7e3734305ba2","Type":"ContainerDied","Data":"d4e0e97af777319b76f144521900a95b1e09894921b87da1db1e0044f62c99f0"} Oct 07 12:45:59 crc kubenswrapper[5024]: I1007 12:45:59.089203 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75bb4695fc-nw9lx" Oct 07 12:45:59 crc kubenswrapper[5024]: I1007 12:45:59.089233 5024 scope.go:117] "RemoveContainer" containerID="ee897e5526b746b61721a2daf69d773f15e3b642ac4c65c8c83fcc40ae4e7403" Oct 07 12:45:59 crc kubenswrapper[5024]: I1007 12:45:59.195177 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26c7bebc-2225-4d65-a8fc-7e3734305ba2-ovsdbserver-nb\") pod \"26c7bebc-2225-4d65-a8fc-7e3734305ba2\" (UID: \"26c7bebc-2225-4d65-a8fc-7e3734305ba2\") " Oct 07 12:45:59 crc kubenswrapper[5024]: I1007 12:45:59.195828 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26c7bebc-2225-4d65-a8fc-7e3734305ba2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "26c7bebc-2225-4d65-a8fc-7e3734305ba2" (UID: "26c7bebc-2225-4d65-a8fc-7e3734305ba2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:45:59 crc kubenswrapper[5024]: I1007 12:45:59.196051 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26c7bebc-2225-4d65-a8fc-7e3734305ba2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 07 12:45:59 crc kubenswrapper[5024]: I1007 12:45:59.442687 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-nw9lx"] Oct 07 12:45:59 crc kubenswrapper[5024]: I1007 12:45:59.452472 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-nw9lx"] Oct 07 12:46:00 crc kubenswrapper[5024]: I1007 12:46:00.761443 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26c7bebc-2225-4d65-a8fc-7e3734305ba2" path="/var/lib/kubelet/pods/26c7bebc-2225-4d65-a8fc-7e3734305ba2/volumes" Oct 07 12:46:01 crc kubenswrapper[5024]: I1007 12:46:01.198837 5024 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/barbican-5c90-account-create-gb4jv"] Oct 07 12:46:01 crc kubenswrapper[5024]: E1007 12:46:01.199247 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c7bebc-2225-4d65-a8fc-7e3734305ba2" containerName="init" Oct 07 12:46:01 crc kubenswrapper[5024]: I1007 12:46:01.199268 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c7bebc-2225-4d65-a8fc-7e3734305ba2" containerName="init" Oct 07 12:46:01 crc kubenswrapper[5024]: I1007 12:46:01.199531 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c7bebc-2225-4d65-a8fc-7e3734305ba2" containerName="init" Oct 07 12:46:01 crc kubenswrapper[5024]: I1007 12:46:01.200152 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5c90-account-create-gb4jv" Oct 07 12:46:01 crc kubenswrapper[5024]: I1007 12:46:01.201871 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Oct 07 12:46:01 crc kubenswrapper[5024]: I1007 12:46:01.210830 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5c90-account-create-gb4jv"] Oct 07 12:46:01 crc kubenswrapper[5024]: I1007 12:46:01.224617 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hl9c\" (UniqueName: \"kubernetes.io/projected/2520e994-c7f7-4439-9c46-8398e1b55cf8-kube-api-access-5hl9c\") pod \"barbican-5c90-account-create-gb4jv\" (UID: \"2520e994-c7f7-4439-9c46-8398e1b55cf8\") " pod="openstack/barbican-5c90-account-create-gb4jv" Oct 07 12:46:01 crc kubenswrapper[5024]: I1007 12:46:01.300719 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-e88c-account-create-pdvhd"] Oct 07 12:46:01 crc kubenswrapper[5024]: I1007 12:46:01.301785 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e88c-account-create-pdvhd" Oct 07 12:46:01 crc kubenswrapper[5024]: I1007 12:46:01.304225 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Oct 07 12:46:01 crc kubenswrapper[5024]: I1007 12:46:01.308420 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e88c-account-create-pdvhd"] Oct 07 12:46:01 crc kubenswrapper[5024]: I1007 12:46:01.326080 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hl9c\" (UniqueName: \"kubernetes.io/projected/2520e994-c7f7-4439-9c46-8398e1b55cf8-kube-api-access-5hl9c\") pod \"barbican-5c90-account-create-gb4jv\" (UID: \"2520e994-c7f7-4439-9c46-8398e1b55cf8\") " pod="openstack/barbican-5c90-account-create-gb4jv" Oct 07 12:46:01 crc kubenswrapper[5024]: I1007 12:46:01.326168 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhvht\" (UniqueName: \"kubernetes.io/projected/151cc44b-5fbc-401a-81b1-b65ffa4b85b1-kube-api-access-dhvht\") pod \"cinder-e88c-account-create-pdvhd\" (UID: \"151cc44b-5fbc-401a-81b1-b65ffa4b85b1\") " pod="openstack/cinder-e88c-account-create-pdvhd" Oct 07 12:46:01 crc kubenswrapper[5024]: I1007 12:46:01.350653 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hl9c\" (UniqueName: \"kubernetes.io/projected/2520e994-c7f7-4439-9c46-8398e1b55cf8-kube-api-access-5hl9c\") pod \"barbican-5c90-account-create-gb4jv\" (UID: \"2520e994-c7f7-4439-9c46-8398e1b55cf8\") " pod="openstack/barbican-5c90-account-create-gb4jv" Oct 07 12:46:01 crc kubenswrapper[5024]: I1007 12:46:01.427973 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhvht\" (UniqueName: \"kubernetes.io/projected/151cc44b-5fbc-401a-81b1-b65ffa4b85b1-kube-api-access-dhvht\") pod \"cinder-e88c-account-create-pdvhd\" (UID: 
\"151cc44b-5fbc-401a-81b1-b65ffa4b85b1\") " pod="openstack/cinder-e88c-account-create-pdvhd" Oct 07 12:46:01 crc kubenswrapper[5024]: I1007 12:46:01.445870 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhvht\" (UniqueName: \"kubernetes.io/projected/151cc44b-5fbc-401a-81b1-b65ffa4b85b1-kube-api-access-dhvht\") pod \"cinder-e88c-account-create-pdvhd\" (UID: \"151cc44b-5fbc-401a-81b1-b65ffa4b85b1\") " pod="openstack/cinder-e88c-account-create-pdvhd" Oct 07 12:46:01 crc kubenswrapper[5024]: I1007 12:46:01.522175 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5c90-account-create-gb4jv" Oct 07 12:46:01 crc kubenswrapper[5024]: I1007 12:46:01.602796 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-f874-account-create-5s8rq"] Oct 07 12:46:01 crc kubenswrapper[5024]: I1007 12:46:01.604259 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f874-account-create-5s8rq" Oct 07 12:46:01 crc kubenswrapper[5024]: I1007 12:46:01.606845 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Oct 07 12:46:01 crc kubenswrapper[5024]: I1007 12:46:01.609741 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f874-account-create-5s8rq"] Oct 07 12:46:01 crc kubenswrapper[5024]: I1007 12:46:01.622050 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e88c-account-create-pdvhd" Oct 07 12:46:01 crc kubenswrapper[5024]: I1007 12:46:01.630516 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cvcn\" (UniqueName: \"kubernetes.io/projected/c35bee31-8ade-4f6d-b6fe-35d989d2e251-kube-api-access-6cvcn\") pod \"neutron-f874-account-create-5s8rq\" (UID: \"c35bee31-8ade-4f6d-b6fe-35d989d2e251\") " pod="openstack/neutron-f874-account-create-5s8rq" Oct 07 12:46:01 crc kubenswrapper[5024]: I1007 12:46:01.732225 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cvcn\" (UniqueName: \"kubernetes.io/projected/c35bee31-8ade-4f6d-b6fe-35d989d2e251-kube-api-access-6cvcn\") pod \"neutron-f874-account-create-5s8rq\" (UID: \"c35bee31-8ade-4f6d-b6fe-35d989d2e251\") " pod="openstack/neutron-f874-account-create-5s8rq" Oct 07 12:46:01 crc kubenswrapper[5024]: I1007 12:46:01.757885 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cvcn\" (UniqueName: \"kubernetes.io/projected/c35bee31-8ade-4f6d-b6fe-35d989d2e251-kube-api-access-6cvcn\") pod \"neutron-f874-account-create-5s8rq\" (UID: \"c35bee31-8ade-4f6d-b6fe-35d989d2e251\") " pod="openstack/neutron-f874-account-create-5s8rq" Oct 07 12:46:01 crc kubenswrapper[5024]: I1007 12:46:01.983500 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f874-account-create-5s8rq" Oct 07 12:46:01 crc kubenswrapper[5024]: I1007 12:46:01.988798 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5c90-account-create-gb4jv"] Oct 07 12:46:02 crc kubenswrapper[5024]: W1007 12:46:02.000693 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2520e994_c7f7_4439_9c46_8398e1b55cf8.slice/crio-c287745c5bcd146a404249d3a0f868f2c2fba424ebbcd51e02bc19ebd2c4cf2a WatchSource:0}: Error finding container c287745c5bcd146a404249d3a0f868f2c2fba424ebbcd51e02bc19ebd2c4cf2a: Status 404 returned error can't find the container with id c287745c5bcd146a404249d3a0f868f2c2fba424ebbcd51e02bc19ebd2c4cf2a Oct 07 12:46:02 crc kubenswrapper[5024]: I1007 12:46:02.096794 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e88c-account-create-pdvhd"] Oct 07 12:46:02 crc kubenswrapper[5024]: I1007 12:46:02.119747 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5c90-account-create-gb4jv" event={"ID":"2520e994-c7f7-4439-9c46-8398e1b55cf8","Type":"ContainerStarted","Data":"c287745c5bcd146a404249d3a0f868f2c2fba424ebbcd51e02bc19ebd2c4cf2a"} Oct 07 12:46:02 crc kubenswrapper[5024]: I1007 12:46:02.122884 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e88c-account-create-pdvhd" event={"ID":"151cc44b-5fbc-401a-81b1-b65ffa4b85b1","Type":"ContainerStarted","Data":"862bc190a354a8e8f8d72a7b96d7b4acfef6b7593c39a202d330eff51001fdf1"} Oct 07 12:46:02 crc kubenswrapper[5024]: I1007 12:46:02.408596 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f874-account-create-5s8rq"] Oct 07 12:46:02 crc kubenswrapper[5024]: W1007 12:46:02.414773 5024 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc35bee31_8ade_4f6d_b6fe_35d989d2e251.slice/crio-035783f205822fd6b825c78216c6ca77666afa653f295532fb7cd2f57ba86a5a WatchSource:0}: Error finding container 035783f205822fd6b825c78216c6ca77666afa653f295532fb7cd2f57ba86a5a: Status 404 returned error can't find the container with id 035783f205822fd6b825c78216c6ca77666afa653f295532fb7cd2f57ba86a5a Oct 07 12:46:03 crc kubenswrapper[5024]: I1007 12:46:03.131044 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f874-account-create-5s8rq" event={"ID":"c35bee31-8ade-4f6d-b6fe-35d989d2e251","Type":"ContainerStarted","Data":"035783f205822fd6b825c78216c6ca77666afa653f295532fb7cd2f57ba86a5a"} Oct 07 12:46:03 crc kubenswrapper[5024]: I1007 12:46:03.133393 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5c90-account-create-gb4jv" event={"ID":"2520e994-c7f7-4439-9c46-8398e1b55cf8","Type":"ContainerStarted","Data":"2d242175b162f1346a0e2e34906a9f49ca7bea5f79251df561f25100b0b0baa8"} Oct 07 12:46:04 crc kubenswrapper[5024]: I1007 12:46:04.141255 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f874-account-create-5s8rq" event={"ID":"c35bee31-8ade-4f6d-b6fe-35d989d2e251","Type":"ContainerStarted","Data":"9640940a21c8d739424e837add2332f9e9ed19c44468aa67fbd0522f3382b5be"} Oct 07 12:46:04 crc kubenswrapper[5024]: I1007 12:46:04.143996 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e88c-account-create-pdvhd" event={"ID":"151cc44b-5fbc-401a-81b1-b65ffa4b85b1","Type":"ContainerStarted","Data":"e475246913a93bc814397b24b01b47361d58cb4c482c0a8636b160bbbf029756"} Oct 07 12:46:04 crc kubenswrapper[5024]: I1007 12:46:04.160047 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-f874-account-create-5s8rq" podStartSLOduration=3.160017234 podStartE2EDuration="3.160017234s" podCreationTimestamp="2025-10-07 12:46:01 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:46:04.156726087 +0000 UTC m=+1102.232512925" watchObservedRunningTime="2025-10-07 12:46:04.160017234 +0000 UTC m=+1102.235804112" Oct 07 12:46:04 crc kubenswrapper[5024]: I1007 12:46:04.187311 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-5c90-account-create-gb4jv" podStartSLOduration=3.187279884 podStartE2EDuration="3.187279884s" podCreationTimestamp="2025-10-07 12:46:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:46:04.177380293 +0000 UTC m=+1102.253167211" watchObservedRunningTime="2025-10-07 12:46:04.187279884 +0000 UTC m=+1102.263066772" Oct 07 12:46:04 crc kubenswrapper[5024]: I1007 12:46:04.204042 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-e88c-account-create-pdvhd" podStartSLOduration=3.204015865 podStartE2EDuration="3.204015865s" podCreationTimestamp="2025-10-07 12:46:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:46:04.193196817 +0000 UTC m=+1102.268983745" watchObservedRunningTime="2025-10-07 12:46:04.204015865 +0000 UTC m=+1102.279802733" Oct 07 12:46:08 crc kubenswrapper[5024]: I1007 12:46:08.204520 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-c77tc" event={"ID":"12a03ea3-aba5-4ae0-b494-6c96cc221d03","Type":"ContainerStarted","Data":"a1ef91e314a58792ea367d5d9e6b03190fb8f98df4e87c9055051b0e41a4e102"} Oct 07 12:46:08 crc kubenswrapper[5024]: I1007 12:46:08.205071 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-745b9ddc8c-c77tc" Oct 07 12:46:08 crc kubenswrapper[5024]: I1007 12:46:08.207643 5024 generic.go:334] 
"Generic (PLEG): container finished" podID="c35bee31-8ade-4f6d-b6fe-35d989d2e251" containerID="9640940a21c8d739424e837add2332f9e9ed19c44468aa67fbd0522f3382b5be" exitCode=0 Oct 07 12:46:08 crc kubenswrapper[5024]: I1007 12:46:08.207656 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f874-account-create-5s8rq" event={"ID":"c35bee31-8ade-4f6d-b6fe-35d989d2e251","Type":"ContainerDied","Data":"9640940a21c8d739424e837add2332f9e9ed19c44468aa67fbd0522f3382b5be"} Oct 07 12:46:08 crc kubenswrapper[5024]: I1007 12:46:08.210291 5024 generic.go:334] "Generic (PLEG): container finished" podID="2520e994-c7f7-4439-9c46-8398e1b55cf8" containerID="2d242175b162f1346a0e2e34906a9f49ca7bea5f79251df561f25100b0b0baa8" exitCode=0 Oct 07 12:46:08 crc kubenswrapper[5024]: I1007 12:46:08.210343 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5c90-account-create-gb4jv" event={"ID":"2520e994-c7f7-4439-9c46-8398e1b55cf8","Type":"ContainerDied","Data":"2d242175b162f1346a0e2e34906a9f49ca7bea5f79251df561f25100b0b0baa8"} Oct 07 12:46:08 crc kubenswrapper[5024]: I1007 12:46:08.213268 5024 generic.go:334] "Generic (PLEG): container finished" podID="151cc44b-5fbc-401a-81b1-b65ffa4b85b1" containerID="e475246913a93bc814397b24b01b47361d58cb4c482c0a8636b160bbbf029756" exitCode=0 Oct 07 12:46:08 crc kubenswrapper[5024]: I1007 12:46:08.213340 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e88c-account-create-pdvhd" event={"ID":"151cc44b-5fbc-401a-81b1-b65ffa4b85b1","Type":"ContainerDied","Data":"e475246913a93bc814397b24b01b47361d58cb4c482c0a8636b160bbbf029756"} Oct 07 12:46:08 crc kubenswrapper[5024]: I1007 12:46:08.228265 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-745b9ddc8c-c77tc" podStartSLOduration=13.228245254 podStartE2EDuration="13.228245254s" podCreationTimestamp="2025-10-07 12:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:46:08.223858285 +0000 UTC m=+1106.299645133" watchObservedRunningTime="2025-10-07 12:46:08.228245254 +0000 UTC m=+1106.304032092" Oct 07 12:46:10 crc kubenswrapper[5024]: I1007 12:46:10.240004 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f874-account-create-5s8rq" event={"ID":"c35bee31-8ade-4f6d-b6fe-35d989d2e251","Type":"ContainerDied","Data":"035783f205822fd6b825c78216c6ca77666afa653f295532fb7cd2f57ba86a5a"} Oct 07 12:46:10 crc kubenswrapper[5024]: I1007 12:46:10.241274 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="035783f205822fd6b825c78216c6ca77666afa653f295532fb7cd2f57ba86a5a" Oct 07 12:46:10 crc kubenswrapper[5024]: I1007 12:46:10.313050 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f874-account-create-5s8rq" Oct 07 12:46:10 crc kubenswrapper[5024]: I1007 12:46:10.393790 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cvcn\" (UniqueName: \"kubernetes.io/projected/c35bee31-8ade-4f6d-b6fe-35d989d2e251-kube-api-access-6cvcn\") pod \"c35bee31-8ade-4f6d-b6fe-35d989d2e251\" (UID: \"c35bee31-8ade-4f6d-b6fe-35d989d2e251\") " Oct 07 12:46:10 crc kubenswrapper[5024]: I1007 12:46:10.411160 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c35bee31-8ade-4f6d-b6fe-35d989d2e251-kube-api-access-6cvcn" (OuterVolumeSpecName: "kube-api-access-6cvcn") pod "c35bee31-8ade-4f6d-b6fe-35d989d2e251" (UID: "c35bee31-8ade-4f6d-b6fe-35d989d2e251"). InnerVolumeSpecName "kube-api-access-6cvcn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:46:10 crc kubenswrapper[5024]: I1007 12:46:10.496913 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cvcn\" (UniqueName: \"kubernetes.io/projected/c35bee31-8ade-4f6d-b6fe-35d989d2e251-kube-api-access-6cvcn\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:10 crc kubenswrapper[5024]: I1007 12:46:10.835176 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e88c-account-create-pdvhd" Oct 07 12:46:10 crc kubenswrapper[5024]: I1007 12:46:10.904548 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhvht\" (UniqueName: \"kubernetes.io/projected/151cc44b-5fbc-401a-81b1-b65ffa4b85b1-kube-api-access-dhvht\") pod \"151cc44b-5fbc-401a-81b1-b65ffa4b85b1\" (UID: \"151cc44b-5fbc-401a-81b1-b65ffa4b85b1\") " Oct 07 12:46:10 crc kubenswrapper[5024]: I1007 12:46:10.907619 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/151cc44b-5fbc-401a-81b1-b65ffa4b85b1-kube-api-access-dhvht" (OuterVolumeSpecName: "kube-api-access-dhvht") pod "151cc44b-5fbc-401a-81b1-b65ffa4b85b1" (UID: "151cc44b-5fbc-401a-81b1-b65ffa4b85b1"). InnerVolumeSpecName "kube-api-access-dhvht". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:46:10 crc kubenswrapper[5024]: I1007 12:46:10.941266 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5c90-account-create-gb4jv" Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.006316 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hl9c\" (UniqueName: \"kubernetes.io/projected/2520e994-c7f7-4439-9c46-8398e1b55cf8-kube-api-access-5hl9c\") pod \"2520e994-c7f7-4439-9c46-8398e1b55cf8\" (UID: \"2520e994-c7f7-4439-9c46-8398e1b55cf8\") " Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.007350 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhvht\" (UniqueName: \"kubernetes.io/projected/151cc44b-5fbc-401a-81b1-b65ffa4b85b1-kube-api-access-dhvht\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.011586 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2520e994-c7f7-4439-9c46-8398e1b55cf8-kube-api-access-5hl9c" (OuterVolumeSpecName: "kube-api-access-5hl9c") pod "2520e994-c7f7-4439-9c46-8398e1b55cf8" (UID: "2520e994-c7f7-4439-9c46-8398e1b55cf8"). InnerVolumeSpecName "kube-api-access-5hl9c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.108705 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hl9c\" (UniqueName: \"kubernetes.io/projected/2520e994-c7f7-4439-9c46-8398e1b55cf8-kube-api-access-5hl9c\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.249351 5024 generic.go:334] "Generic (PLEG): container finished" podID="36b57d44-185e-4645-9078-4deb8da00531" containerID="c8bb8b6c3282f6fb85e0525e2327bf96b3267d635d8e8cf466c9ce6e2a5e7ba4" exitCode=0 Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.249427 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cvp9t" event={"ID":"36b57d44-185e-4645-9078-4deb8da00531","Type":"ContainerDied","Data":"c8bb8b6c3282f6fb85e0525e2327bf96b3267d635d8e8cf466c9ce6e2a5e7ba4"} Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.252256 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5c90-account-create-gb4jv" event={"ID":"2520e994-c7f7-4439-9c46-8398e1b55cf8","Type":"ContainerDied","Data":"c287745c5bcd146a404249d3a0f868f2c2fba424ebbcd51e02bc19ebd2c4cf2a"} Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.252296 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5c90-account-create-gb4jv" Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.252296 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c287745c5bcd146a404249d3a0f868f2c2fba424ebbcd51e02bc19ebd2c4cf2a" Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.254001 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"230742cc-7316-4b6b-8331-5a0352b4ebcb","Type":"ContainerStarted","Data":"4d629c26264afef2da5485b6a6d7f90d550a9753f7cc3b15e7aab1062b894645"} Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.256358 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e88c-account-create-pdvhd" event={"ID":"151cc44b-5fbc-401a-81b1-b65ffa4b85b1","Type":"ContainerDied","Data":"862bc190a354a8e8f8d72a7b96d7b4acfef6b7593c39a202d330eff51001fdf1"} Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.256385 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="862bc190a354a8e8f8d72a7b96d7b4acfef6b7593c39a202d330eff51001fdf1" Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.256423 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e88c-account-create-pdvhd" Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.258914 5024 generic.go:334] "Generic (PLEG): container finished" podID="8be45da7-a22a-42cf-95cd-4dafa5f4b92b" containerID="6902d138bb396eac85623e13d1fbf4cbf89103d17aa20f39618feaee6ce7711f" exitCode=0 Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.258994 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hhprc" event={"ID":"8be45da7-a22a-42cf-95cd-4dafa5f4b92b","Type":"ContainerDied","Data":"6902d138bb396eac85623e13d1fbf4cbf89103d17aa20f39618feaee6ce7711f"} Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.260537 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f874-account-create-5s8rq" Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.261456 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8vb4z" event={"ID":"dddaa23e-2e38-4835-a311-69a6e7ef3c16","Type":"ContainerStarted","Data":"d8928a2ab208f1c85e289020d0f03678bb13dfa7df8cc55901b893db2cb45c31"} Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.325002 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-8vb4z" podStartSLOduration=1.999001132 podStartE2EDuration="16.324973897s" podCreationTimestamp="2025-10-07 12:45:55 +0000 UTC" firstStartedPulling="2025-10-07 12:45:56.425456643 +0000 UTC m=+1094.501243481" lastFinishedPulling="2025-10-07 12:46:10.751429408 +0000 UTC m=+1108.827216246" observedRunningTime="2025-10-07 12:46:11.313575443 +0000 UTC m=+1109.389362291" watchObservedRunningTime="2025-10-07 12:46:11.324973897 +0000 UTC m=+1109.400760745" Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.885349 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-gtjxh"] Oct 07 12:46:11 crc kubenswrapper[5024]: E1007 12:46:11.886207 5024 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2520e994-c7f7-4439-9c46-8398e1b55cf8" containerName="mariadb-account-create" Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.886225 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2520e994-c7f7-4439-9c46-8398e1b55cf8" containerName="mariadb-account-create" Oct 07 12:46:11 crc kubenswrapper[5024]: E1007 12:46:11.886250 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c35bee31-8ade-4f6d-b6fe-35d989d2e251" containerName="mariadb-account-create" Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.886256 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c35bee31-8ade-4f6d-b6fe-35d989d2e251" containerName="mariadb-account-create" Oct 07 12:46:11 crc kubenswrapper[5024]: E1007 12:46:11.886273 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="151cc44b-5fbc-401a-81b1-b65ffa4b85b1" containerName="mariadb-account-create" Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.886279 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="151cc44b-5fbc-401a-81b1-b65ffa4b85b1" containerName="mariadb-account-create" Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.886488 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="151cc44b-5fbc-401a-81b1-b65ffa4b85b1" containerName="mariadb-account-create" Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.886518 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="2520e994-c7f7-4439-9c46-8398e1b55cf8" containerName="mariadb-account-create" Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.886537 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="c35bee31-8ade-4f6d-b6fe-35d989d2e251" containerName="mariadb-account-create" Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.887174 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-gtjxh" Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.892691 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.893237 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.893497 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-jkdr9" Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.909309 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-gtjxh"] Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.925675 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b1a60ce0-997e-4a92-9ed2-8326cd95d4a3-config\") pod \"neutron-db-sync-gtjxh\" (UID: \"b1a60ce0-997e-4a92-9ed2-8326cd95d4a3\") " pod="openstack/neutron-db-sync-gtjxh" Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.926087 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6twdz\" (UniqueName: \"kubernetes.io/projected/b1a60ce0-997e-4a92-9ed2-8326cd95d4a3-kube-api-access-6twdz\") pod \"neutron-db-sync-gtjxh\" (UID: \"b1a60ce0-997e-4a92-9ed2-8326cd95d4a3\") " pod="openstack/neutron-db-sync-gtjxh" Oct 07 12:46:11 crc kubenswrapper[5024]: I1007 12:46:11.926989 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a60ce0-997e-4a92-9ed2-8326cd95d4a3-combined-ca-bundle\") pod \"neutron-db-sync-gtjxh\" (UID: \"b1a60ce0-997e-4a92-9ed2-8326cd95d4a3\") " pod="openstack/neutron-db-sync-gtjxh" Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.039425 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6twdz\" (UniqueName: \"kubernetes.io/projected/b1a60ce0-997e-4a92-9ed2-8326cd95d4a3-kube-api-access-6twdz\") pod \"neutron-db-sync-gtjxh\" (UID: \"b1a60ce0-997e-4a92-9ed2-8326cd95d4a3\") " pod="openstack/neutron-db-sync-gtjxh" Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.039501 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a60ce0-997e-4a92-9ed2-8326cd95d4a3-combined-ca-bundle\") pod \"neutron-db-sync-gtjxh\" (UID: \"b1a60ce0-997e-4a92-9ed2-8326cd95d4a3\") " pod="openstack/neutron-db-sync-gtjxh" Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.039561 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b1a60ce0-997e-4a92-9ed2-8326cd95d4a3-config\") pod \"neutron-db-sync-gtjxh\" (UID: \"b1a60ce0-997e-4a92-9ed2-8326cd95d4a3\") " pod="openstack/neutron-db-sync-gtjxh" Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.047248 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b1a60ce0-997e-4a92-9ed2-8326cd95d4a3-config\") pod \"neutron-db-sync-gtjxh\" (UID: \"b1a60ce0-997e-4a92-9ed2-8326cd95d4a3\") " pod="openstack/neutron-db-sync-gtjxh" Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.055820 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a60ce0-997e-4a92-9ed2-8326cd95d4a3-combined-ca-bundle\") pod \"neutron-db-sync-gtjxh\" (UID: \"b1a60ce0-997e-4a92-9ed2-8326cd95d4a3\") " pod="openstack/neutron-db-sync-gtjxh" Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.055957 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6twdz\" (UniqueName: 
\"kubernetes.io/projected/b1a60ce0-997e-4a92-9ed2-8326cd95d4a3-kube-api-access-6twdz\") pod \"neutron-db-sync-gtjxh\" (UID: \"b1a60ce0-997e-4a92-9ed2-8326cd95d4a3\") " pod="openstack/neutron-db-sync-gtjxh" Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.227410 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gtjxh" Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.786097 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hhprc" Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.836121 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-cvp9t" Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.872187 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-config-data\") pod \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\" (UID: \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\") " Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.872259 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-combined-ca-bundle\") pod \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\" (UID: \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\") " Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.872339 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww6lh\" (UniqueName: \"kubernetes.io/projected/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-kube-api-access-ww6lh\") pod \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\" (UID: \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\") " Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.872391 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/36b57d44-185e-4645-9078-4deb8da00531-combined-ca-bundle\") pod \"36b57d44-185e-4645-9078-4deb8da00531\" (UID: \"36b57d44-185e-4645-9078-4deb8da00531\") " Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.872548 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbbh6\" (UniqueName: \"kubernetes.io/projected/36b57d44-185e-4645-9078-4deb8da00531-kube-api-access-rbbh6\") pod \"36b57d44-185e-4645-9078-4deb8da00531\" (UID: \"36b57d44-185e-4645-9078-4deb8da00531\") " Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.872625 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-fernet-keys\") pod \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\" (UID: \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\") " Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.872706 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/36b57d44-185e-4645-9078-4deb8da00531-db-sync-config-data\") pod \"36b57d44-185e-4645-9078-4deb8da00531\" (UID: \"36b57d44-185e-4645-9078-4deb8da00531\") " Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.872733 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-scripts\") pod \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\" (UID: \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\") " Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.872787 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36b57d44-185e-4645-9078-4deb8da00531-config-data\") pod \"36b57d44-185e-4645-9078-4deb8da00531\" (UID: \"36b57d44-185e-4645-9078-4deb8da00531\") " Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 
12:46:12.872820 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-credential-keys\") pod \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\" (UID: \"8be45da7-a22a-42cf-95cd-4dafa5f4b92b\") " Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.903870 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36b57d44-185e-4645-9078-4deb8da00531-kube-api-access-rbbh6" (OuterVolumeSpecName: "kube-api-access-rbbh6") pod "36b57d44-185e-4645-9078-4deb8da00531" (UID: "36b57d44-185e-4645-9078-4deb8da00531"). InnerVolumeSpecName "kube-api-access-rbbh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.905383 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-kube-api-access-ww6lh" (OuterVolumeSpecName: "kube-api-access-ww6lh") pod "8be45da7-a22a-42cf-95cd-4dafa5f4b92b" (UID: "8be45da7-a22a-42cf-95cd-4dafa5f4b92b"). InnerVolumeSpecName "kube-api-access-ww6lh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.924479 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36b57d44-185e-4645-9078-4deb8da00531-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "36b57d44-185e-4645-9078-4deb8da00531" (UID: "36b57d44-185e-4645-9078-4deb8da00531"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.935331 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8be45da7-a22a-42cf-95cd-4dafa5f4b92b" (UID: "8be45da7-a22a-42cf-95cd-4dafa5f4b92b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.935438 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-scripts" (OuterVolumeSpecName: "scripts") pod "8be45da7-a22a-42cf-95cd-4dafa5f4b92b" (UID: "8be45da7-a22a-42cf-95cd-4dafa5f4b92b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.938398 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8be45da7-a22a-42cf-95cd-4dafa5f4b92b" (UID: "8be45da7-a22a-42cf-95cd-4dafa5f4b92b"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.963085 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-gtjxh"] Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.975534 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww6lh\" (UniqueName: \"kubernetes.io/projected/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-kube-api-access-ww6lh\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.975573 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbbh6\" (UniqueName: \"kubernetes.io/projected/36b57d44-185e-4645-9078-4deb8da00531-kube-api-access-rbbh6\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.975587 5024 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.975597 5024 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/36b57d44-185e-4645-9078-4deb8da00531-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.975607 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-scripts\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.975616 5024 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:12 crc kubenswrapper[5024]: I1007 12:46:12.988313 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8be45da7-a22a-42cf-95cd-4dafa5f4b92b" (UID: "8be45da7-a22a-42cf-95cd-4dafa5f4b92b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.013206 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36b57d44-185e-4645-9078-4deb8da00531-config-data" (OuterVolumeSpecName: "config-data") pod "36b57d44-185e-4645-9078-4deb8da00531" (UID: "36b57d44-185e-4645-9078-4deb8da00531"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.013725 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36b57d44-185e-4645-9078-4deb8da00531-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36b57d44-185e-4645-9078-4deb8da00531" (UID: "36b57d44-185e-4645-9078-4deb8da00531"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.019493 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-config-data" (OuterVolumeSpecName: "config-data") pod "8be45da7-a22a-42cf-95cd-4dafa5f4b92b" (UID: "8be45da7-a22a-42cf-95cd-4dafa5f4b92b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.078284 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36b57d44-185e-4645-9078-4deb8da00531-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.078325 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.078339 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8be45da7-a22a-42cf-95cd-4dafa5f4b92b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.078354 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36b57d44-185e-4645-9078-4deb8da00531-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.282410 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"230742cc-7316-4b6b-8331-5a0352b4ebcb","Type":"ContainerStarted","Data":"00809a40595e8eef31fa04fda5c6ce879a689e58ede122be8acd1ddafc6ee86f"} Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.287578 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hhprc" event={"ID":"8be45da7-a22a-42cf-95cd-4dafa5f4b92b","Type":"ContainerDied","Data":"619c6053083c7a7e406e8ef90fb4622b9bb6cda1a3cd16ae49b7cbca0949ba31"} Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.287603 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="619c6053083c7a7e406e8ef90fb4622b9bb6cda1a3cd16ae49b7cbca0949ba31" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 
12:46:13.287685 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hhprc" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.314561 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gtjxh" event={"ID":"b1a60ce0-997e-4a92-9ed2-8326cd95d4a3","Type":"ContainerStarted","Data":"db97de51cefbb6f8fec0daf7081808cce4d7c30ab6bf39d2459db02b7c29c4cd"} Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.314602 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gtjxh" event={"ID":"b1a60ce0-997e-4a92-9ed2-8326cd95d4a3","Type":"ContainerStarted","Data":"2087fe11ac8c9e9bcb26f3439e65d0e5bfaac2d0212d14df334cebb333cc9ba9"} Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.330871 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-gtjxh" podStartSLOduration=2.330851634 podStartE2EDuration="2.330851634s" podCreationTimestamp="2025-10-07 12:46:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:46:13.326905818 +0000 UTC m=+1111.402692656" watchObservedRunningTime="2025-10-07 12:46:13.330851634 +0000 UTC m=+1111.406638472" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.332830 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cvp9t" event={"ID":"36b57d44-185e-4645-9078-4deb8da00531","Type":"ContainerDied","Data":"eba893d9568ed95f6fb6043c720821b7cfb1b0bf03d08f1b545f0bff02f6222a"} Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.332911 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eba893d9568ed95f6fb6043c720821b7cfb1b0bf03d08f1b545f0bff02f6222a" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.333020 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-cvp9t" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.409246 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hhprc"] Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.423735 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hhprc"] Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.508120 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-9nmmq"] Oct 07 12:46:13 crc kubenswrapper[5024]: E1007 12:46:13.508836 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8be45da7-a22a-42cf-95cd-4dafa5f4b92b" containerName="keystone-bootstrap" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.508852 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8be45da7-a22a-42cf-95cd-4dafa5f4b92b" containerName="keystone-bootstrap" Oct 07 12:46:13 crc kubenswrapper[5024]: E1007 12:46:13.508870 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36b57d44-185e-4645-9078-4deb8da00531" containerName="glance-db-sync" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.508879 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="36b57d44-185e-4645-9078-4deb8da00531" containerName="glance-db-sync" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.509061 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="8be45da7-a22a-42cf-95cd-4dafa5f4b92b" containerName="keystone-bootstrap" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.509073 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="36b57d44-185e-4645-9078-4deb8da00531" containerName="glance-db-sync" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.509657 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-9nmmq" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.513973 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.514156 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.514290 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mtrr5" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.514436 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.524406 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-9nmmq"] Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.591637 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-credential-keys\") pod \"keystone-bootstrap-9nmmq\" (UID: \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\") " pod="openstack/keystone-bootstrap-9nmmq" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.591710 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-combined-ca-bundle\") pod \"keystone-bootstrap-9nmmq\" (UID: \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\") " pod="openstack/keystone-bootstrap-9nmmq" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.591757 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-config-data\") pod \"keystone-bootstrap-9nmmq\" (UID: 
\"b4c25bea-71ea-4c21-9331-19b58c0fdd89\") " pod="openstack/keystone-bootstrap-9nmmq" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.591821 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz84l\" (UniqueName: \"kubernetes.io/projected/b4c25bea-71ea-4c21-9331-19b58c0fdd89-kube-api-access-cz84l\") pod \"keystone-bootstrap-9nmmq\" (UID: \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\") " pod="openstack/keystone-bootstrap-9nmmq" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.591859 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-fernet-keys\") pod \"keystone-bootstrap-9nmmq\" (UID: \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\") " pod="openstack/keystone-bootstrap-9nmmq" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.591890 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-scripts\") pod \"keystone-bootstrap-9nmmq\" (UID: \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\") " pod="openstack/keystone-bootstrap-9nmmq" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.693541 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz84l\" (UniqueName: \"kubernetes.io/projected/b4c25bea-71ea-4c21-9331-19b58c0fdd89-kube-api-access-cz84l\") pod \"keystone-bootstrap-9nmmq\" (UID: \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\") " pod="openstack/keystone-bootstrap-9nmmq" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.693626 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-fernet-keys\") pod \"keystone-bootstrap-9nmmq\" (UID: \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\") " 
pod="openstack/keystone-bootstrap-9nmmq" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.693682 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-scripts\") pod \"keystone-bootstrap-9nmmq\" (UID: \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\") " pod="openstack/keystone-bootstrap-9nmmq" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.693737 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-credential-keys\") pod \"keystone-bootstrap-9nmmq\" (UID: \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\") " pod="openstack/keystone-bootstrap-9nmmq" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.693759 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-combined-ca-bundle\") pod \"keystone-bootstrap-9nmmq\" (UID: \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\") " pod="openstack/keystone-bootstrap-9nmmq" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.693794 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-config-data\") pod \"keystone-bootstrap-9nmmq\" (UID: \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\") " pod="openstack/keystone-bootstrap-9nmmq" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.704347 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-fernet-keys\") pod \"keystone-bootstrap-9nmmq\" (UID: \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\") " pod="openstack/keystone-bootstrap-9nmmq" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.704797 5024 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-credential-keys\") pod \"keystone-bootstrap-9nmmq\" (UID: \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\") " pod="openstack/keystone-bootstrap-9nmmq" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.718585 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-scripts\") pod \"keystone-bootstrap-9nmmq\" (UID: \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\") " pod="openstack/keystone-bootstrap-9nmmq" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.719785 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-combined-ca-bundle\") pod \"keystone-bootstrap-9nmmq\" (UID: \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\") " pod="openstack/keystone-bootstrap-9nmmq" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.721953 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-config-data\") pod \"keystone-bootstrap-9nmmq\" (UID: \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\") " pod="openstack/keystone-bootstrap-9nmmq" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.724354 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz84l\" (UniqueName: \"kubernetes.io/projected/b4c25bea-71ea-4c21-9331-19b58c0fdd89-kube-api-access-cz84l\") pod \"keystone-bootstrap-9nmmq\" (UID: \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\") " pod="openstack/keystone-bootstrap-9nmmq" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.774740 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-c77tc"] Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 
12:46:13.775029 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-745b9ddc8c-c77tc" podUID="12a03ea3-aba5-4ae0-b494-6c96cc221d03" containerName="dnsmasq-dns" containerID="cri-o://a1ef91e314a58792ea367d5d9e6b03190fb8f98df4e87c9055051b0e41a4e102" gracePeriod=10 Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.788325 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-745b9ddc8c-c77tc" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.810909 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-hsjwt"] Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.820882 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.825164 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-hsjwt"] Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.838678 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-9nmmq" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.898396 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2274abe4-ee52-4816-ab0b-31774782dc36-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-hsjwt\" (UID: \"2274abe4-ee52-4816-ab0b-31774782dc36\") " pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.898477 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2274abe4-ee52-4816-ab0b-31774782dc36-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-hsjwt\" (UID: \"2274abe4-ee52-4816-ab0b-31774782dc36\") " pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.898553 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2274abe4-ee52-4816-ab0b-31774782dc36-config\") pod \"dnsmasq-dns-7987f74bbc-hsjwt\" (UID: \"2274abe4-ee52-4816-ab0b-31774782dc36\") " pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.898599 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2274abe4-ee52-4816-ab0b-31774782dc36-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-hsjwt\" (UID: \"2274abe4-ee52-4816-ab0b-31774782dc36\") " pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt" Oct 07 12:46:13 crc kubenswrapper[5024]: I1007 12:46:13.898625 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9xdn\" (UniqueName: \"kubernetes.io/projected/2274abe4-ee52-4816-ab0b-31774782dc36-kube-api-access-b9xdn\") pod \"dnsmasq-dns-7987f74bbc-hsjwt\" 
(UID: \"2274abe4-ee52-4816-ab0b-31774782dc36\") " pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt" Oct 07 12:46:14 crc kubenswrapper[5024]: I1007 12:46:14.001858 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2274abe4-ee52-4816-ab0b-31774782dc36-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-hsjwt\" (UID: \"2274abe4-ee52-4816-ab0b-31774782dc36\") " pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt" Oct 07 12:46:14 crc kubenswrapper[5024]: I1007 12:46:14.002380 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2274abe4-ee52-4816-ab0b-31774782dc36-config\") pod \"dnsmasq-dns-7987f74bbc-hsjwt\" (UID: \"2274abe4-ee52-4816-ab0b-31774782dc36\") " pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt" Oct 07 12:46:14 crc kubenswrapper[5024]: I1007 12:46:14.002423 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2274abe4-ee52-4816-ab0b-31774782dc36-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-hsjwt\" (UID: \"2274abe4-ee52-4816-ab0b-31774782dc36\") " pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt" Oct 07 12:46:14 crc kubenswrapper[5024]: I1007 12:46:14.002444 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9xdn\" (UniqueName: \"kubernetes.io/projected/2274abe4-ee52-4816-ab0b-31774782dc36-kube-api-access-b9xdn\") pod \"dnsmasq-dns-7987f74bbc-hsjwt\" (UID: \"2274abe4-ee52-4816-ab0b-31774782dc36\") " pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt" Oct 07 12:46:14 crc kubenswrapper[5024]: I1007 12:46:14.002466 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2274abe4-ee52-4816-ab0b-31774782dc36-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-hsjwt\" (UID: \"2274abe4-ee52-4816-ab0b-31774782dc36\") " 
pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt" Oct 07 12:46:14 crc kubenswrapper[5024]: I1007 12:46:14.003371 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2274abe4-ee52-4816-ab0b-31774782dc36-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-hsjwt\" (UID: \"2274abe4-ee52-4816-ab0b-31774782dc36\") " pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt" Oct 07 12:46:14 crc kubenswrapper[5024]: I1007 12:46:14.003894 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2274abe4-ee52-4816-ab0b-31774782dc36-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-hsjwt\" (UID: \"2274abe4-ee52-4816-ab0b-31774782dc36\") " pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt" Oct 07 12:46:14 crc kubenswrapper[5024]: I1007 12:46:14.004616 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2274abe4-ee52-4816-ab0b-31774782dc36-config\") pod \"dnsmasq-dns-7987f74bbc-hsjwt\" (UID: \"2274abe4-ee52-4816-ab0b-31774782dc36\") " pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt" Oct 07 12:46:14 crc kubenswrapper[5024]: I1007 12:46:14.005127 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2274abe4-ee52-4816-ab0b-31774782dc36-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-hsjwt\" (UID: \"2274abe4-ee52-4816-ab0b-31774782dc36\") " pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt" Oct 07 12:46:14 crc kubenswrapper[5024]: I1007 12:46:14.032119 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9xdn\" (UniqueName: \"kubernetes.io/projected/2274abe4-ee52-4816-ab0b-31774782dc36-kube-api-access-b9xdn\") pod \"dnsmasq-dns-7987f74bbc-hsjwt\" (UID: \"2274abe4-ee52-4816-ab0b-31774782dc36\") " pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt" Oct 07 12:46:14 crc kubenswrapper[5024]: I1007 12:46:14.162046 
5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt" Oct 07 12:46:14 crc kubenswrapper[5024]: I1007 12:46:14.374685 5024 generic.go:334] "Generic (PLEG): container finished" podID="dddaa23e-2e38-4835-a311-69a6e7ef3c16" containerID="d8928a2ab208f1c85e289020d0f03678bb13dfa7df8cc55901b893db2cb45c31" exitCode=0 Oct 07 12:46:14 crc kubenswrapper[5024]: I1007 12:46:14.374721 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8vb4z" event={"ID":"dddaa23e-2e38-4835-a311-69a6e7ef3c16","Type":"ContainerDied","Data":"d8928a2ab208f1c85e289020d0f03678bb13dfa7df8cc55901b893db2cb45c31"} Oct 07 12:46:14 crc kubenswrapper[5024]: I1007 12:46:14.377728 5024 generic.go:334] "Generic (PLEG): container finished" podID="12a03ea3-aba5-4ae0-b494-6c96cc221d03" containerID="a1ef91e314a58792ea367d5d9e6b03190fb8f98df4e87c9055051b0e41a4e102" exitCode=0 Oct 07 12:46:14 crc kubenswrapper[5024]: I1007 12:46:14.378697 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-c77tc" event={"ID":"12a03ea3-aba5-4ae0-b494-6c96cc221d03","Type":"ContainerDied","Data":"a1ef91e314a58792ea367d5d9e6b03190fb8f98df4e87c9055051b0e41a4e102"} Oct 07 12:46:14 crc kubenswrapper[5024]: I1007 12:46:14.378729 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-c77tc" event={"ID":"12a03ea3-aba5-4ae0-b494-6c96cc221d03","Type":"ContainerDied","Data":"2928e948b8f1c3950d2481695f2d0398db5ce4f87e2999a23bd767512bd23370"} Oct 07 12:46:14 crc kubenswrapper[5024]: I1007 12:46:14.378746 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2928e948b8f1c3950d2481695f2d0398db5ce4f87e2999a23bd767512bd23370" Oct 07 12:46:14 crc kubenswrapper[5024]: I1007 12:46:14.427324 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-745b9ddc8c-c77tc" Oct 07 12:46:15 crc kubenswrapper[5024]: I1007 12:46:14.571058 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-9nmmq"] Oct 07 12:46:15 crc kubenswrapper[5024]: I1007 12:46:14.622684 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12a03ea3-aba5-4ae0-b494-6c96cc221d03-config\") pod \"12a03ea3-aba5-4ae0-b494-6c96cc221d03\" (UID: \"12a03ea3-aba5-4ae0-b494-6c96cc221d03\") " Oct 07 12:46:15 crc kubenswrapper[5024]: I1007 12:46:14.622761 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12a03ea3-aba5-4ae0-b494-6c96cc221d03-ovsdbserver-nb\") pod \"12a03ea3-aba5-4ae0-b494-6c96cc221d03\" (UID: \"12a03ea3-aba5-4ae0-b494-6c96cc221d03\") " Oct 07 12:46:15 crc kubenswrapper[5024]: I1007 12:46:14.622795 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12a03ea3-aba5-4ae0-b494-6c96cc221d03-dns-svc\") pod \"12a03ea3-aba5-4ae0-b494-6c96cc221d03\" (UID: \"12a03ea3-aba5-4ae0-b494-6c96cc221d03\") " Oct 07 12:46:15 crc kubenswrapper[5024]: I1007 12:46:14.622818 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp4vt\" (UniqueName: \"kubernetes.io/projected/12a03ea3-aba5-4ae0-b494-6c96cc221d03-kube-api-access-qp4vt\") pod \"12a03ea3-aba5-4ae0-b494-6c96cc221d03\" (UID: \"12a03ea3-aba5-4ae0-b494-6c96cc221d03\") " Oct 07 12:46:15 crc kubenswrapper[5024]: I1007 12:46:14.622948 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12a03ea3-aba5-4ae0-b494-6c96cc221d03-ovsdbserver-sb\") pod \"12a03ea3-aba5-4ae0-b494-6c96cc221d03\" (UID: \"12a03ea3-aba5-4ae0-b494-6c96cc221d03\") " Oct 07 12:46:15 
crc kubenswrapper[5024]: I1007 12:46:14.629322 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12a03ea3-aba5-4ae0-b494-6c96cc221d03-kube-api-access-qp4vt" (OuterVolumeSpecName: "kube-api-access-qp4vt") pod "12a03ea3-aba5-4ae0-b494-6c96cc221d03" (UID: "12a03ea3-aba5-4ae0-b494-6c96cc221d03"). InnerVolumeSpecName "kube-api-access-qp4vt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 12:46:15 crc kubenswrapper[5024]: I1007 12:46:14.669196 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12a03ea3-aba5-4ae0-b494-6c96cc221d03-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "12a03ea3-aba5-4ae0-b494-6c96cc221d03" (UID: "12a03ea3-aba5-4ae0-b494-6c96cc221d03"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 07 12:46:15 crc kubenswrapper[5024]: I1007 12:46:14.673700 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12a03ea3-aba5-4ae0-b494-6c96cc221d03-config" (OuterVolumeSpecName: "config") pod "12a03ea3-aba5-4ae0-b494-6c96cc221d03" (UID: "12a03ea3-aba5-4ae0-b494-6c96cc221d03"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 07 12:46:15 crc kubenswrapper[5024]: I1007 12:46:14.678714 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12a03ea3-aba5-4ae0-b494-6c96cc221d03-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "12a03ea3-aba5-4ae0-b494-6c96cc221d03" (UID: "12a03ea3-aba5-4ae0-b494-6c96cc221d03"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 07 12:46:15 crc kubenswrapper[5024]: I1007 12:46:14.680970 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12a03ea3-aba5-4ae0-b494-6c96cc221d03-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "12a03ea3-aba5-4ae0-b494-6c96cc221d03" (UID: "12a03ea3-aba5-4ae0-b494-6c96cc221d03"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 07 12:46:15 crc kubenswrapper[5024]: I1007 12:46:14.724713 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12a03ea3-aba5-4ae0-b494-6c96cc221d03-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 07 12:46:15 crc kubenswrapper[5024]: I1007 12:46:14.725350 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12a03ea3-aba5-4ae0-b494-6c96cc221d03-config\") on node \"crc\" DevicePath \"\""
Oct 07 12:46:15 crc kubenswrapper[5024]: I1007 12:46:14.725398 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12a03ea3-aba5-4ae0-b494-6c96cc221d03-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 07 12:46:15 crc kubenswrapper[5024]: I1007 12:46:14.725410 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12a03ea3-aba5-4ae0-b494-6c96cc221d03-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 07 12:46:15 crc kubenswrapper[5024]: I1007 12:46:14.725423 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qp4vt\" (UniqueName: \"kubernetes.io/projected/12a03ea3-aba5-4ae0-b494-6c96cc221d03-kube-api-access-qp4vt\") on node \"crc\" DevicePath \"\""
Oct 07 12:46:15 crc kubenswrapper[5024]: I1007 12:46:14.764637 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8be45da7-a22a-42cf-95cd-4dafa5f4b92b" path="/var/lib/kubelet/pods/8be45da7-a22a-42cf-95cd-4dafa5f4b92b/volumes"
Oct 07 12:46:15 crc kubenswrapper[5024]: I1007 12:46:15.390881 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9nmmq" event={"ID":"b4c25bea-71ea-4c21-9331-19b58c0fdd89","Type":"ContainerStarted","Data":"4dadeaecc87c1adc8f22f54a36e0c09d498e580e3a58b5502e09efa84f8e7f12"}
Oct 07 12:46:15 crc kubenswrapper[5024]: I1007 12:46:15.390918 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9nmmq" event={"ID":"b4c25bea-71ea-4c21-9331-19b58c0fdd89","Type":"ContainerStarted","Data":"dc7bb9a582fb8063029b5a6ffca2e1bdcae377e649bac8582893101387911ae0"}
Oct 07 12:46:15 crc kubenswrapper[5024]: I1007 12:46:15.391002 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-745b9ddc8c-c77tc"
Oct 07 12:46:15 crc kubenswrapper[5024]: I1007 12:46:15.425068 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-9nmmq" podStartSLOduration=2.4250467909999998 podStartE2EDuration="2.425046791s" podCreationTimestamp="2025-10-07 12:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:46:15.411973907 +0000 UTC m=+1113.487760755" watchObservedRunningTime="2025-10-07 12:46:15.425046791 +0000 UTC m=+1113.500833619"
Oct 07 12:46:15 crc kubenswrapper[5024]: I1007 12:46:15.433999 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-c77tc"]
Oct 07 12:46:15 crc kubenswrapper[5024]: I1007 12:46:15.443241 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-c77tc"]
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.106893 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-hsjwt"]
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.414841 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt" event={"ID":"2274abe4-ee52-4816-ab0b-31774782dc36","Type":"ContainerStarted","Data":"b24b216831768f5bc05b341cd7166670fde940728aac04ca07e328cd24eba9da"}
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.421799 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8vb4z"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.424071 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8vb4z" event={"ID":"dddaa23e-2e38-4835-a311-69a6e7ef3c16","Type":"ContainerDied","Data":"8d874a389b6018e286ed997c5254eed61336d6564c3602aa6ba38331a11bad91"}
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.424119 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d874a389b6018e286ed997c5254eed61336d6564c3602aa6ba38331a11bad91"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.482616 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hgqf\" (UniqueName: \"kubernetes.io/projected/dddaa23e-2e38-4835-a311-69a6e7ef3c16-kube-api-access-5hgqf\") pod \"dddaa23e-2e38-4835-a311-69a6e7ef3c16\" (UID: \"dddaa23e-2e38-4835-a311-69a6e7ef3c16\") "
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.482659 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dddaa23e-2e38-4835-a311-69a6e7ef3c16-scripts\") pod \"dddaa23e-2e38-4835-a311-69a6e7ef3c16\" (UID: \"dddaa23e-2e38-4835-a311-69a6e7ef3c16\") "
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.482679 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dddaa23e-2e38-4835-a311-69a6e7ef3c16-combined-ca-bundle\") pod \"dddaa23e-2e38-4835-a311-69a6e7ef3c16\" (UID: \"dddaa23e-2e38-4835-a311-69a6e7ef3c16\") "
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.482723 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dddaa23e-2e38-4835-a311-69a6e7ef3c16-config-data\") pod \"dddaa23e-2e38-4835-a311-69a6e7ef3c16\" (UID: \"dddaa23e-2e38-4835-a311-69a6e7ef3c16\") "
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.482763 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dddaa23e-2e38-4835-a311-69a6e7ef3c16-logs\") pod \"dddaa23e-2e38-4835-a311-69a6e7ef3c16\" (UID: \"dddaa23e-2e38-4835-a311-69a6e7ef3c16\") "
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.485122 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dddaa23e-2e38-4835-a311-69a6e7ef3c16-logs" (OuterVolumeSpecName: "logs") pod "dddaa23e-2e38-4835-a311-69a6e7ef3c16" (UID: "dddaa23e-2e38-4835-a311-69a6e7ef3c16"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.486482 5024 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dddaa23e-2e38-4835-a311-69a6e7ef3c16-logs\") on node \"crc\" DevicePath \"\""
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.492166 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dddaa23e-2e38-4835-a311-69a6e7ef3c16-scripts" (OuterVolumeSpecName: "scripts") pod "dddaa23e-2e38-4835-a311-69a6e7ef3c16" (UID: "dddaa23e-2e38-4835-a311-69a6e7ef3c16"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.528809 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dddaa23e-2e38-4835-a311-69a6e7ef3c16-kube-api-access-5hgqf" (OuterVolumeSpecName: "kube-api-access-5hgqf") pod "dddaa23e-2e38-4835-a311-69a6e7ef3c16" (UID: "dddaa23e-2e38-4835-a311-69a6e7ef3c16"). InnerVolumeSpecName "kube-api-access-5hgqf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.574915 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dddaa23e-2e38-4835-a311-69a6e7ef3c16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dddaa23e-2e38-4835-a311-69a6e7ef3c16" (UID: "dddaa23e-2e38-4835-a311-69a6e7ef3c16"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.586114 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dddaa23e-2e38-4835-a311-69a6e7ef3c16-config-data" (OuterVolumeSpecName: "config-data") pod "dddaa23e-2e38-4835-a311-69a6e7ef3c16" (UID: "dddaa23e-2e38-4835-a311-69a6e7ef3c16"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.587826 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hgqf\" (UniqueName: \"kubernetes.io/projected/dddaa23e-2e38-4835-a311-69a6e7ef3c16-kube-api-access-5hgqf\") on node \"crc\" DevicePath \"\""
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.587918 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dddaa23e-2e38-4835-a311-69a6e7ef3c16-scripts\") on node \"crc\" DevicePath \"\""
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.587972 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dddaa23e-2e38-4835-a311-69a6e7ef3c16-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.588019 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dddaa23e-2e38-4835-a311-69a6e7ef3c16-config-data\") on node \"crc\" DevicePath \"\""
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.726922 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-s85hg"]
Oct 07 12:46:16 crc kubenswrapper[5024]: E1007 12:46:16.727611 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a03ea3-aba5-4ae0-b494-6c96cc221d03" containerName="dnsmasq-dns"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.727677 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="12a03ea3-aba5-4ae0-b494-6c96cc221d03" containerName="dnsmasq-dns"
Oct 07 12:46:16 crc kubenswrapper[5024]: E1007 12:46:16.727745 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dddaa23e-2e38-4835-a311-69a6e7ef3c16" containerName="placement-db-sync"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.727789 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="dddaa23e-2e38-4835-a311-69a6e7ef3c16" containerName="placement-db-sync"
Oct 07 12:46:16 crc kubenswrapper[5024]: E1007 12:46:16.727861 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a03ea3-aba5-4ae0-b494-6c96cc221d03" containerName="init"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.727909 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="12a03ea3-aba5-4ae0-b494-6c96cc221d03" containerName="init"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.728110 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="12a03ea3-aba5-4ae0-b494-6c96cc221d03" containerName="dnsmasq-dns"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.728191 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="dddaa23e-2e38-4835-a311-69a6e7ef3c16" containerName="placement-db-sync"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.728777 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-s85hg"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.732053 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.732372 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-rjp4t"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.738883 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-pzbf6"]
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.740238 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pzbf6"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.745112 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.745498 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5pkz6"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.745706 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.775068 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12a03ea3-aba5-4ae0-b494-6c96cc221d03" path="/var/lib/kubelet/pods/12a03ea3-aba5-4ae0-b494-6c96cc221d03/volumes"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.778909 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-s85hg"]
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.778950 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-pzbf6"]
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.794853 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np8bh\" (UniqueName: \"kubernetes.io/projected/a2f43bf9-8914-4def-a454-a4e5bd3d843b-kube-api-access-np8bh\") pod \"barbican-db-sync-s85hg\" (UID: \"a2f43bf9-8914-4def-a454-a4e5bd3d843b\") " pod="openstack/barbican-db-sync-s85hg"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.794983 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ac3ad2e-791b-4133-8417-61c5465da6ea-etc-machine-id\") pod \"cinder-db-sync-pzbf6\" (UID: \"2ac3ad2e-791b-4133-8417-61c5465da6ea\") " pod="openstack/cinder-db-sync-pzbf6"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.795056 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55b7v\" (UniqueName: \"kubernetes.io/projected/2ac3ad2e-791b-4133-8417-61c5465da6ea-kube-api-access-55b7v\") pod \"cinder-db-sync-pzbf6\" (UID: \"2ac3ad2e-791b-4133-8417-61c5465da6ea\") " pod="openstack/cinder-db-sync-pzbf6"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.795176 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f43bf9-8914-4def-a454-a4e5bd3d843b-combined-ca-bundle\") pod \"barbican-db-sync-s85hg\" (UID: \"a2f43bf9-8914-4def-a454-a4e5bd3d843b\") " pod="openstack/barbican-db-sync-s85hg"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.795286 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac3ad2e-791b-4133-8417-61c5465da6ea-combined-ca-bundle\") pod \"cinder-db-sync-pzbf6\" (UID: \"2ac3ad2e-791b-4133-8417-61c5465da6ea\") " pod="openstack/cinder-db-sync-pzbf6"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.795988 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ac3ad2e-791b-4133-8417-61c5465da6ea-db-sync-config-data\") pod \"cinder-db-sync-pzbf6\" (UID: \"2ac3ad2e-791b-4133-8417-61c5465da6ea\") " pod="openstack/cinder-db-sync-pzbf6"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.796038 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a2f43bf9-8914-4def-a454-a4e5bd3d843b-db-sync-config-data\") pod \"barbican-db-sync-s85hg\" (UID: \"a2f43bf9-8914-4def-a454-a4e5bd3d843b\") " pod="openstack/barbican-db-sync-s85hg"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.796098 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ac3ad2e-791b-4133-8417-61c5465da6ea-config-data\") pod \"cinder-db-sync-pzbf6\" (UID: \"2ac3ad2e-791b-4133-8417-61c5465da6ea\") " pod="openstack/cinder-db-sync-pzbf6"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.796120 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ac3ad2e-791b-4133-8417-61c5465da6ea-scripts\") pod \"cinder-db-sync-pzbf6\" (UID: \"2ac3ad2e-791b-4133-8417-61c5465da6ea\") " pod="openstack/cinder-db-sync-pzbf6"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.897907 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55b7v\" (UniqueName: \"kubernetes.io/projected/2ac3ad2e-791b-4133-8417-61c5465da6ea-kube-api-access-55b7v\") pod \"cinder-db-sync-pzbf6\" (UID: \"2ac3ad2e-791b-4133-8417-61c5465da6ea\") " pod="openstack/cinder-db-sync-pzbf6"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.897991 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f43bf9-8914-4def-a454-a4e5bd3d843b-combined-ca-bundle\") pod \"barbican-db-sync-s85hg\" (UID: \"a2f43bf9-8914-4def-a454-a4e5bd3d843b\") " pod="openstack/barbican-db-sync-s85hg"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.898066 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac3ad2e-791b-4133-8417-61c5465da6ea-combined-ca-bundle\") pod \"cinder-db-sync-pzbf6\" (UID: \"2ac3ad2e-791b-4133-8417-61c5465da6ea\") " pod="openstack/cinder-db-sync-pzbf6"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.898104 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ac3ad2e-791b-4133-8417-61c5465da6ea-db-sync-config-data\") pod \"cinder-db-sync-pzbf6\" (UID: \"2ac3ad2e-791b-4133-8417-61c5465da6ea\") " pod="openstack/cinder-db-sync-pzbf6"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.898155 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a2f43bf9-8914-4def-a454-a4e5bd3d843b-db-sync-config-data\") pod \"barbican-db-sync-s85hg\" (UID: \"a2f43bf9-8914-4def-a454-a4e5bd3d843b\") " pod="openstack/barbican-db-sync-s85hg"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.898209 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ac3ad2e-791b-4133-8417-61c5465da6ea-config-data\") pod \"cinder-db-sync-pzbf6\" (UID: \"2ac3ad2e-791b-4133-8417-61c5465da6ea\") " pod="openstack/cinder-db-sync-pzbf6"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.898230 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ac3ad2e-791b-4133-8417-61c5465da6ea-scripts\") pod \"cinder-db-sync-pzbf6\" (UID: \"2ac3ad2e-791b-4133-8417-61c5465da6ea\") " pod="openstack/cinder-db-sync-pzbf6"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.898260 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np8bh\" (UniqueName: \"kubernetes.io/projected/a2f43bf9-8914-4def-a454-a4e5bd3d843b-kube-api-access-np8bh\") pod \"barbican-db-sync-s85hg\" (UID: \"a2f43bf9-8914-4def-a454-a4e5bd3d843b\") " pod="openstack/barbican-db-sync-s85hg"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.898310 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ac3ad2e-791b-4133-8417-61c5465da6ea-etc-machine-id\") pod \"cinder-db-sync-pzbf6\" (UID: \"2ac3ad2e-791b-4133-8417-61c5465da6ea\") " pod="openstack/cinder-db-sync-pzbf6"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.898388 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ac3ad2e-791b-4133-8417-61c5465da6ea-etc-machine-id\") pod \"cinder-db-sync-pzbf6\" (UID: \"2ac3ad2e-791b-4133-8417-61c5465da6ea\") " pod="openstack/cinder-db-sync-pzbf6"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.906312 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a2f43bf9-8914-4def-a454-a4e5bd3d843b-db-sync-config-data\") pod \"barbican-db-sync-s85hg\" (UID: \"a2f43bf9-8914-4def-a454-a4e5bd3d843b\") " pod="openstack/barbican-db-sync-s85hg"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.906852 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac3ad2e-791b-4133-8417-61c5465da6ea-combined-ca-bundle\") pod \"cinder-db-sync-pzbf6\" (UID: \"2ac3ad2e-791b-4133-8417-61c5465da6ea\") " pod="openstack/cinder-db-sync-pzbf6"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.913280 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ac3ad2e-791b-4133-8417-61c5465da6ea-config-data\") pod \"cinder-db-sync-pzbf6\" (UID: \"2ac3ad2e-791b-4133-8417-61c5465da6ea\") " pod="openstack/cinder-db-sync-pzbf6"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.918644 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ac3ad2e-791b-4133-8417-61c5465da6ea-scripts\") pod \"cinder-db-sync-pzbf6\" (UID: \"2ac3ad2e-791b-4133-8417-61c5465da6ea\") " pod="openstack/cinder-db-sync-pzbf6"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.918647 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ac3ad2e-791b-4133-8417-61c5465da6ea-db-sync-config-data\") pod \"cinder-db-sync-pzbf6\" (UID: \"2ac3ad2e-791b-4133-8417-61c5465da6ea\") " pod="openstack/cinder-db-sync-pzbf6"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.919097 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f43bf9-8914-4def-a454-a4e5bd3d843b-combined-ca-bundle\") pod \"barbican-db-sync-s85hg\" (UID: \"a2f43bf9-8914-4def-a454-a4e5bd3d843b\") " pod="openstack/barbican-db-sync-s85hg"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.923046 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55b7v\" (UniqueName: \"kubernetes.io/projected/2ac3ad2e-791b-4133-8417-61c5465da6ea-kube-api-access-55b7v\") pod \"cinder-db-sync-pzbf6\" (UID: \"2ac3ad2e-791b-4133-8417-61c5465da6ea\") " pod="openstack/cinder-db-sync-pzbf6"
Oct 07 12:46:16 crc kubenswrapper[5024]: I1007 12:46:16.928095 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np8bh\" (UniqueName: \"kubernetes.io/projected/a2f43bf9-8914-4def-a454-a4e5bd3d843b-kube-api-access-np8bh\") pod \"barbican-db-sync-s85hg\" (UID: \"a2f43bf9-8914-4def-a454-a4e5bd3d843b\") " pod="openstack/barbican-db-sync-s85hg"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.051857 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-s85hg"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.062511 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pzbf6"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.436485 5024 generic.go:334] "Generic (PLEG): container finished" podID="2274abe4-ee52-4816-ab0b-31774782dc36" containerID="ef323b81749f59b5c2bb55a30be5ba51d994c1b2b50eb62b4a6573729e7b6f2d" exitCode=0
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.436901 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8vb4z"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.437974 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt" event={"ID":"2274abe4-ee52-4816-ab0b-31774782dc36","Type":"ContainerDied","Data":"ef323b81749f59b5c2bb55a30be5ba51d994c1b2b50eb62b4a6573729e7b6f2d"}
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.552605 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6dccff77b6-pr5gt"]
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.553848 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6dccff77b6-pr5gt"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.558262 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.558479 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.558633 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.558644 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.558939 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-szxzk"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.588368 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6dccff77b6-pr5gt"]
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.617994 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg87p\" (UniqueName: \"kubernetes.io/projected/a10c29d5-3a80-417b-90ec-9c2da32f3de0-kube-api-access-wg87p\") pod \"placement-6dccff77b6-pr5gt\" (UID: \"a10c29d5-3a80-417b-90ec-9c2da32f3de0\") " pod="openstack/placement-6dccff77b6-pr5gt"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.618057 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10c29d5-3a80-417b-90ec-9c2da32f3de0-internal-tls-certs\") pod \"placement-6dccff77b6-pr5gt\" (UID: \"a10c29d5-3a80-417b-90ec-9c2da32f3de0\") " pod="openstack/placement-6dccff77b6-pr5gt"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.618082 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a10c29d5-3a80-417b-90ec-9c2da32f3de0-scripts\") pod \"placement-6dccff77b6-pr5gt\" (UID: \"a10c29d5-3a80-417b-90ec-9c2da32f3de0\") " pod="openstack/placement-6dccff77b6-pr5gt"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.618318 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a10c29d5-3a80-417b-90ec-9c2da32f3de0-logs\") pod \"placement-6dccff77b6-pr5gt\" (UID: \"a10c29d5-3a80-417b-90ec-9c2da32f3de0\") " pod="openstack/placement-6dccff77b6-pr5gt"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.618345 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10c29d5-3a80-417b-90ec-9c2da32f3de0-public-tls-certs\") pod \"placement-6dccff77b6-pr5gt\" (UID: \"a10c29d5-3a80-417b-90ec-9c2da32f3de0\") " pod="openstack/placement-6dccff77b6-pr5gt"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.618402 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10c29d5-3a80-417b-90ec-9c2da32f3de0-config-data\") pod \"placement-6dccff77b6-pr5gt\" (UID: \"a10c29d5-3a80-417b-90ec-9c2da32f3de0\") " pod="openstack/placement-6dccff77b6-pr5gt"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.618448 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10c29d5-3a80-417b-90ec-9c2da32f3de0-combined-ca-bundle\") pod \"placement-6dccff77b6-pr5gt\" (UID: \"a10c29d5-3a80-417b-90ec-9c2da32f3de0\") " pod="openstack/placement-6dccff77b6-pr5gt"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.720646 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg87p\" (UniqueName: \"kubernetes.io/projected/a10c29d5-3a80-417b-90ec-9c2da32f3de0-kube-api-access-wg87p\") pod \"placement-6dccff77b6-pr5gt\" (UID: \"a10c29d5-3a80-417b-90ec-9c2da32f3de0\") " pod="openstack/placement-6dccff77b6-pr5gt"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.720712 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10c29d5-3a80-417b-90ec-9c2da32f3de0-internal-tls-certs\") pod \"placement-6dccff77b6-pr5gt\" (UID: \"a10c29d5-3a80-417b-90ec-9c2da32f3de0\") " pod="openstack/placement-6dccff77b6-pr5gt"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.720730 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a10c29d5-3a80-417b-90ec-9c2da32f3de0-scripts\") pod \"placement-6dccff77b6-pr5gt\" (UID: \"a10c29d5-3a80-417b-90ec-9c2da32f3de0\") " pod="openstack/placement-6dccff77b6-pr5gt"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.720792 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a10c29d5-3a80-417b-90ec-9c2da32f3de0-logs\") pod \"placement-6dccff77b6-pr5gt\" (UID: \"a10c29d5-3a80-417b-90ec-9c2da32f3de0\") " pod="openstack/placement-6dccff77b6-pr5gt"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.720813 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10c29d5-3a80-417b-90ec-9c2da32f3de0-public-tls-certs\") pod \"placement-6dccff77b6-pr5gt\" (UID: \"a10c29d5-3a80-417b-90ec-9c2da32f3de0\") " pod="openstack/placement-6dccff77b6-pr5gt"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.720868 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10c29d5-3a80-417b-90ec-9c2da32f3de0-config-data\") pod \"placement-6dccff77b6-pr5gt\" (UID: \"a10c29d5-3a80-417b-90ec-9c2da32f3de0\") " pod="openstack/placement-6dccff77b6-pr5gt"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.720894 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10c29d5-3a80-417b-90ec-9c2da32f3de0-combined-ca-bundle\") pod \"placement-6dccff77b6-pr5gt\" (UID: \"a10c29d5-3a80-417b-90ec-9c2da32f3de0\") " pod="openstack/placement-6dccff77b6-pr5gt"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.721623 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a10c29d5-3a80-417b-90ec-9c2da32f3de0-logs\") pod \"placement-6dccff77b6-pr5gt\" (UID: \"a10c29d5-3a80-417b-90ec-9c2da32f3de0\") " pod="openstack/placement-6dccff77b6-pr5gt"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.725707 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a10c29d5-3a80-417b-90ec-9c2da32f3de0-scripts\") pod \"placement-6dccff77b6-pr5gt\" (UID: \"a10c29d5-3a80-417b-90ec-9c2da32f3de0\") " pod="openstack/placement-6dccff77b6-pr5gt"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.725735 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10c29d5-3a80-417b-90ec-9c2da32f3de0-internal-tls-certs\") pod \"placement-6dccff77b6-pr5gt\" (UID: \"a10c29d5-3a80-417b-90ec-9c2da32f3de0\") " pod="openstack/placement-6dccff77b6-pr5gt"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.726031 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10c29d5-3a80-417b-90ec-9c2da32f3de0-config-data\") pod \"placement-6dccff77b6-pr5gt\" (UID: \"a10c29d5-3a80-417b-90ec-9c2da32f3de0\") " pod="openstack/placement-6dccff77b6-pr5gt"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.726317 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10c29d5-3a80-417b-90ec-9c2da32f3de0-combined-ca-bundle\") pod \"placement-6dccff77b6-pr5gt\" (UID: \"a10c29d5-3a80-417b-90ec-9c2da32f3de0\") " pod="openstack/placement-6dccff77b6-pr5gt"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.732848 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10c29d5-3a80-417b-90ec-9c2da32f3de0-public-tls-certs\") pod \"placement-6dccff77b6-pr5gt\" (UID: \"a10c29d5-3a80-417b-90ec-9c2da32f3de0\") " pod="openstack/placement-6dccff77b6-pr5gt"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.743445 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg87p\" (UniqueName: \"kubernetes.io/projected/a10c29d5-3a80-417b-90ec-9c2da32f3de0-kube-api-access-wg87p\") pod \"placement-6dccff77b6-pr5gt\" (UID: \"a10c29d5-3a80-417b-90ec-9c2da32f3de0\") " pod="openstack/placement-6dccff77b6-pr5gt"
Oct 07 12:46:17 crc kubenswrapper[5024]: I1007 12:46:17.877010 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6dccff77b6-pr5gt"
Oct 07 12:46:19 crc kubenswrapper[5024]: I1007 12:46:19.488966 5024 generic.go:334] "Generic (PLEG): container finished" podID="b4c25bea-71ea-4c21-9331-19b58c0fdd89" containerID="4dadeaecc87c1adc8f22f54a36e0c09d498e580e3a58b5502e09efa84f8e7f12" exitCode=0
Oct 07 12:46:19 crc kubenswrapper[5024]: I1007 12:46:19.489047 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9nmmq" event={"ID":"b4c25bea-71ea-4c21-9331-19b58c0fdd89","Type":"ContainerDied","Data":"4dadeaecc87c1adc8f22f54a36e0c09d498e580e3a58b5502e09efa84f8e7f12"}
Oct 07 12:46:20 crc kubenswrapper[5024]: I1007 12:46:20.414575 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-pzbf6"]
Oct 07 12:46:20 crc kubenswrapper[5024]: W1007 12:46:20.426489 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ac3ad2e_791b_4133_8417_61c5465da6ea.slice/crio-f7e1066b7c6d9a85c674a369046f8f8b0167cf8b9cc370b57c6c4d354dc5563a WatchSource:0}: Error finding container f7e1066b7c6d9a85c674a369046f8f8b0167cf8b9cc370b57c6c4d354dc5563a: Status 404 returned error can't find the container with id f7e1066b7c6d9a85c674a369046f8f8b0167cf8b9cc370b57c6c4d354dc5563a
Oct 07 12:46:20 crc kubenswrapper[5024]: I1007 12:46:20.498798 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pzbf6" event={"ID":"2ac3ad2e-791b-4133-8417-61c5465da6ea","Type":"ContainerStarted","Data":"f7e1066b7c6d9a85c674a369046f8f8b0167cf8b9cc370b57c6c4d354dc5563a"}
Oct 07 12:46:20 crc kubenswrapper[5024]: I1007 12:46:20.502528 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"230742cc-7316-4b6b-8331-5a0352b4ebcb","Type":"ContainerStarted","Data":"7dac2a5f98632aa1844df6767f3c2455228d27a805b403677895ffd5efd2016a"}
Oct 07 12:46:20 crc kubenswrapper[5024]: I1007 12:46:20.504758 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt" event={"ID":"2274abe4-ee52-4816-ab0b-31774782dc36","Type":"ContainerStarted","Data":"4271e11fa3c3ebd964dfbca7cf605efc900fb2f27d0cca2395b83e18eff47da7"}
Oct 07 12:46:20 crc kubenswrapper[5024]: I1007 12:46:20.523303 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt" podStartSLOduration=7.523284163 podStartE2EDuration="7.523284163s" podCreationTimestamp="2025-10-07 12:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:46:20.522953803 +0000 UTC m=+1118.598740641" watchObservedRunningTime="2025-10-07 12:46:20.523284163 +0000 UTC m=+1118.599071001"
Oct 07 12:46:20 crc kubenswrapper[5024]: I1007 12:46:20.659460 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6dccff77b6-pr5gt"]
Oct 07 12:46:20 crc kubenswrapper[5024]: W1007 12:46:20.669916 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda10c29d5_3a80_417b_90ec_9c2da32f3de0.slice/crio-c5786318aa53d4c6659e46fc60862ca70b25eb3636bbb01dd4496ff940674d4f WatchSource:0}: Error finding container c5786318aa53d4c6659e46fc60862ca70b25eb3636bbb01dd4496ff940674d4f: Status 404 returned error can't find the container with id c5786318aa53d4c6659e46fc60862ca70b25eb3636bbb01dd4496ff940674d4f
Oct 07 12:46:20 crc kubenswrapper[5024]: I1007 12:46:20.748145 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-s85hg"]
Oct 07 12:46:20 crc kubenswrapper[5024]: W1007 12:46:20.758793 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2f43bf9_8914_4def_a454_a4e5bd3d843b.slice/crio-e41ba0e2e8382157169c5dcfa560407740524adaf74403324a3f8180be0fbf64
WatchSource:0}: Error finding container e41ba0e2e8382157169c5dcfa560407740524adaf74403324a3f8180be0fbf64: Status 404 returned error can't find the container with id e41ba0e2e8382157169c5dcfa560407740524adaf74403324a3f8180be0fbf64 Oct 07 12:46:20 crc kubenswrapper[5024]: I1007 12:46:20.812743 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-9nmmq" Oct 07 12:46:20 crc kubenswrapper[5024]: I1007 12:46:20.892886 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-combined-ca-bundle\") pod \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\" (UID: \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\") " Oct 07 12:46:20 crc kubenswrapper[5024]: I1007 12:46:20.893000 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-fernet-keys\") pod \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\" (UID: \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\") " Oct 07 12:46:20 crc kubenswrapper[5024]: I1007 12:46:20.893050 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-credential-keys\") pod \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\" (UID: \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\") " Oct 07 12:46:20 crc kubenswrapper[5024]: I1007 12:46:20.893096 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-scripts\") pod \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\" (UID: \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\") " Oct 07 12:46:20 crc kubenswrapper[5024]: I1007 12:46:20.893117 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz84l\" (UniqueName: 
\"kubernetes.io/projected/b4c25bea-71ea-4c21-9331-19b58c0fdd89-kube-api-access-cz84l\") pod \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\" (UID: \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\") " Oct 07 12:46:20 crc kubenswrapper[5024]: I1007 12:46:20.893160 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-config-data\") pod \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\" (UID: \"b4c25bea-71ea-4c21-9331-19b58c0fdd89\") " Oct 07 12:46:20 crc kubenswrapper[5024]: I1007 12:46:20.898574 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b4c25bea-71ea-4c21-9331-19b58c0fdd89" (UID: "b4c25bea-71ea-4c21-9331-19b58c0fdd89"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:46:20 crc kubenswrapper[5024]: I1007 12:46:20.899233 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4c25bea-71ea-4c21-9331-19b58c0fdd89-kube-api-access-cz84l" (OuterVolumeSpecName: "kube-api-access-cz84l") pod "b4c25bea-71ea-4c21-9331-19b58c0fdd89" (UID: "b4c25bea-71ea-4c21-9331-19b58c0fdd89"). InnerVolumeSpecName "kube-api-access-cz84l". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:46:20 crc kubenswrapper[5024]: I1007 12:46:20.904086 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b4c25bea-71ea-4c21-9331-19b58c0fdd89" (UID: "b4c25bea-71ea-4c21-9331-19b58c0fdd89"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:46:20 crc kubenswrapper[5024]: I1007 12:46:20.904228 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-scripts" (OuterVolumeSpecName: "scripts") pod "b4c25bea-71ea-4c21-9331-19b58c0fdd89" (UID: "b4c25bea-71ea-4c21-9331-19b58c0fdd89"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:46:20 crc kubenswrapper[5024]: I1007 12:46:20.920939 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-config-data" (OuterVolumeSpecName: "config-data") pod "b4c25bea-71ea-4c21-9331-19b58c0fdd89" (UID: "b4c25bea-71ea-4c21-9331-19b58c0fdd89"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:46:20 crc kubenswrapper[5024]: I1007 12:46:20.929288 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4c25bea-71ea-4c21-9331-19b58c0fdd89" (UID: "b4c25bea-71ea-4c21-9331-19b58c0fdd89"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:46:20 crc kubenswrapper[5024]: I1007 12:46:20.995789 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:20 crc kubenswrapper[5024]: I1007 12:46:20.995827 5024 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:20 crc kubenswrapper[5024]: I1007 12:46:20.995836 5024 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:20 crc kubenswrapper[5024]: I1007 12:46:20.995846 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-scripts\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:20 crc kubenswrapper[5024]: I1007 12:46:20.995855 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz84l\" (UniqueName: \"kubernetes.io/projected/b4c25bea-71ea-4c21-9331-19b58c0fdd89-kube-api-access-cz84l\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:20 crc kubenswrapper[5024]: I1007 12:46:20.995865 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c25bea-71ea-4c21-9331-19b58c0fdd89-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.516058 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9nmmq" event={"ID":"b4c25bea-71ea-4c21-9331-19b58c0fdd89","Type":"ContainerDied","Data":"dc7bb9a582fb8063029b5a6ffca2e1bdcae377e649bac8582893101387911ae0"} Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 
12:46:21.516100 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc7bb9a582fb8063029b5a6ffca2e1bdcae377e649bac8582893101387911ae0" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.516110 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-9nmmq" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.517899 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-s85hg" event={"ID":"a2f43bf9-8914-4def-a454-a4e5bd3d843b","Type":"ContainerStarted","Data":"e41ba0e2e8382157169c5dcfa560407740524adaf74403324a3f8180be0fbf64"} Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.524458 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6dccff77b6-pr5gt" event={"ID":"a10c29d5-3a80-417b-90ec-9c2da32f3de0","Type":"ContainerStarted","Data":"cbc9b20d621ed115fed6feeae5cae954f9b800a4777ce0b19b3e45d169cdf521"} Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.524510 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6dccff77b6-pr5gt" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.524525 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6dccff77b6-pr5gt" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.524536 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6dccff77b6-pr5gt" event={"ID":"a10c29d5-3a80-417b-90ec-9c2da32f3de0","Type":"ContainerStarted","Data":"ad05db4db03b6fc0d4ec9782e99397640eb98c3f59246c5d8c37320647425d1e"} Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.524546 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6dccff77b6-pr5gt" event={"ID":"a10c29d5-3a80-417b-90ec-9c2da32f3de0","Type":"ContainerStarted","Data":"c5786318aa53d4c6659e46fc60862ca70b25eb3636bbb01dd4496ff940674d4f"} Oct 07 12:46:21 crc kubenswrapper[5024]: 
I1007 12:46:21.524561 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.573296 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6dccff77b6-pr5gt" podStartSLOduration=4.573273121 podStartE2EDuration="4.573273121s" podCreationTimestamp="2025-10-07 12:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:46:21.548889455 +0000 UTC m=+1119.624676293" watchObservedRunningTime="2025-10-07 12:46:21.573273121 +0000 UTC m=+1119.649059979" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.595703 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d8dff8cf4-cpjpz"] Oct 07 12:46:21 crc kubenswrapper[5024]: E1007 12:46:21.596026 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4c25bea-71ea-4c21-9331-19b58c0fdd89" containerName="keystone-bootstrap" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.596041 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4c25bea-71ea-4c21-9331-19b58c0fdd89" containerName="keystone-bootstrap" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.596257 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4c25bea-71ea-4c21-9331-19b58c0fdd89" containerName="keystone-bootstrap" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.596801 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-d8dff8cf4-cpjpz" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.600185 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mtrr5" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.600379 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.600479 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.600715 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.601007 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.602701 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.610504 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d8dff8cf4-cpjpz"] Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.708165 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea15930e-0c3f-421a-a9b8-a82399fa4e93-scripts\") pod \"keystone-d8dff8cf4-cpjpz\" (UID: \"ea15930e-0c3f-421a-a9b8-a82399fa4e93\") " pod="openstack/keystone-d8dff8cf4-cpjpz" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.708506 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea15930e-0c3f-421a-a9b8-a82399fa4e93-fernet-keys\") pod \"keystone-d8dff8cf4-cpjpz\" (UID: \"ea15930e-0c3f-421a-a9b8-a82399fa4e93\") " pod="openstack/keystone-d8dff8cf4-cpjpz" 
Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.708545 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncs5r\" (UniqueName: \"kubernetes.io/projected/ea15930e-0c3f-421a-a9b8-a82399fa4e93-kube-api-access-ncs5r\") pod \"keystone-d8dff8cf4-cpjpz\" (UID: \"ea15930e-0c3f-421a-a9b8-a82399fa4e93\") " pod="openstack/keystone-d8dff8cf4-cpjpz" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.708576 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea15930e-0c3f-421a-a9b8-a82399fa4e93-public-tls-certs\") pod \"keystone-d8dff8cf4-cpjpz\" (UID: \"ea15930e-0c3f-421a-a9b8-a82399fa4e93\") " pod="openstack/keystone-d8dff8cf4-cpjpz" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.708609 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea15930e-0c3f-421a-a9b8-a82399fa4e93-internal-tls-certs\") pod \"keystone-d8dff8cf4-cpjpz\" (UID: \"ea15930e-0c3f-421a-a9b8-a82399fa4e93\") " pod="openstack/keystone-d8dff8cf4-cpjpz" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.708638 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ea15930e-0c3f-421a-a9b8-a82399fa4e93-credential-keys\") pod \"keystone-d8dff8cf4-cpjpz\" (UID: \"ea15930e-0c3f-421a-a9b8-a82399fa4e93\") " pod="openstack/keystone-d8dff8cf4-cpjpz" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.708670 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea15930e-0c3f-421a-a9b8-a82399fa4e93-config-data\") pod \"keystone-d8dff8cf4-cpjpz\" (UID: \"ea15930e-0c3f-421a-a9b8-a82399fa4e93\") " 
pod="openstack/keystone-d8dff8cf4-cpjpz" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.708901 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea15930e-0c3f-421a-a9b8-a82399fa4e93-combined-ca-bundle\") pod \"keystone-d8dff8cf4-cpjpz\" (UID: \"ea15930e-0c3f-421a-a9b8-a82399fa4e93\") " pod="openstack/keystone-d8dff8cf4-cpjpz" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.810485 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea15930e-0c3f-421a-a9b8-a82399fa4e93-scripts\") pod \"keystone-d8dff8cf4-cpjpz\" (UID: \"ea15930e-0c3f-421a-a9b8-a82399fa4e93\") " pod="openstack/keystone-d8dff8cf4-cpjpz" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.810544 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea15930e-0c3f-421a-a9b8-a82399fa4e93-fernet-keys\") pod \"keystone-d8dff8cf4-cpjpz\" (UID: \"ea15930e-0c3f-421a-a9b8-a82399fa4e93\") " pod="openstack/keystone-d8dff8cf4-cpjpz" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.810585 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncs5r\" (UniqueName: \"kubernetes.io/projected/ea15930e-0c3f-421a-a9b8-a82399fa4e93-kube-api-access-ncs5r\") pod \"keystone-d8dff8cf4-cpjpz\" (UID: \"ea15930e-0c3f-421a-a9b8-a82399fa4e93\") " pod="openstack/keystone-d8dff8cf4-cpjpz" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.810614 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea15930e-0c3f-421a-a9b8-a82399fa4e93-public-tls-certs\") pod \"keystone-d8dff8cf4-cpjpz\" (UID: \"ea15930e-0c3f-421a-a9b8-a82399fa4e93\") " pod="openstack/keystone-d8dff8cf4-cpjpz" Oct 07 12:46:21 crc kubenswrapper[5024]: 
I1007 12:46:21.810632 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea15930e-0c3f-421a-a9b8-a82399fa4e93-internal-tls-certs\") pod \"keystone-d8dff8cf4-cpjpz\" (UID: \"ea15930e-0c3f-421a-a9b8-a82399fa4e93\") " pod="openstack/keystone-d8dff8cf4-cpjpz" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.810655 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ea15930e-0c3f-421a-a9b8-a82399fa4e93-credential-keys\") pod \"keystone-d8dff8cf4-cpjpz\" (UID: \"ea15930e-0c3f-421a-a9b8-a82399fa4e93\") " pod="openstack/keystone-d8dff8cf4-cpjpz" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.810691 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea15930e-0c3f-421a-a9b8-a82399fa4e93-config-data\") pod \"keystone-d8dff8cf4-cpjpz\" (UID: \"ea15930e-0c3f-421a-a9b8-a82399fa4e93\") " pod="openstack/keystone-d8dff8cf4-cpjpz" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.810748 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea15930e-0c3f-421a-a9b8-a82399fa4e93-combined-ca-bundle\") pod \"keystone-d8dff8cf4-cpjpz\" (UID: \"ea15930e-0c3f-421a-a9b8-a82399fa4e93\") " pod="openstack/keystone-d8dff8cf4-cpjpz" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.816896 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ea15930e-0c3f-421a-a9b8-a82399fa4e93-credential-keys\") pod \"keystone-d8dff8cf4-cpjpz\" (UID: \"ea15930e-0c3f-421a-a9b8-a82399fa4e93\") " pod="openstack/keystone-d8dff8cf4-cpjpz" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.816960 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea15930e-0c3f-421a-a9b8-a82399fa4e93-internal-tls-certs\") pod \"keystone-d8dff8cf4-cpjpz\" (UID: \"ea15930e-0c3f-421a-a9b8-a82399fa4e93\") " pod="openstack/keystone-d8dff8cf4-cpjpz" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.817163 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea15930e-0c3f-421a-a9b8-a82399fa4e93-scripts\") pod \"keystone-d8dff8cf4-cpjpz\" (UID: \"ea15930e-0c3f-421a-a9b8-a82399fa4e93\") " pod="openstack/keystone-d8dff8cf4-cpjpz" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.817185 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea15930e-0c3f-421a-a9b8-a82399fa4e93-config-data\") pod \"keystone-d8dff8cf4-cpjpz\" (UID: \"ea15930e-0c3f-421a-a9b8-a82399fa4e93\") " pod="openstack/keystone-d8dff8cf4-cpjpz" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.817543 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea15930e-0c3f-421a-a9b8-a82399fa4e93-public-tls-certs\") pod \"keystone-d8dff8cf4-cpjpz\" (UID: \"ea15930e-0c3f-421a-a9b8-a82399fa4e93\") " pod="openstack/keystone-d8dff8cf4-cpjpz" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.817697 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea15930e-0c3f-421a-a9b8-a82399fa4e93-combined-ca-bundle\") pod \"keystone-d8dff8cf4-cpjpz\" (UID: \"ea15930e-0c3f-421a-a9b8-a82399fa4e93\") " pod="openstack/keystone-d8dff8cf4-cpjpz" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.817809 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea15930e-0c3f-421a-a9b8-a82399fa4e93-fernet-keys\") pod \"keystone-d8dff8cf4-cpjpz\" (UID: 
\"ea15930e-0c3f-421a-a9b8-a82399fa4e93\") " pod="openstack/keystone-d8dff8cf4-cpjpz" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.830190 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncs5r\" (UniqueName: \"kubernetes.io/projected/ea15930e-0c3f-421a-a9b8-a82399fa4e93-kube-api-access-ncs5r\") pod \"keystone-d8dff8cf4-cpjpz\" (UID: \"ea15930e-0c3f-421a-a9b8-a82399fa4e93\") " pod="openstack/keystone-d8dff8cf4-cpjpz" Oct 07 12:46:21 crc kubenswrapper[5024]: I1007 12:46:21.918826 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d8dff8cf4-cpjpz" Oct 07 12:46:22 crc kubenswrapper[5024]: I1007 12:46:22.569059 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d8dff8cf4-cpjpz"] Oct 07 12:46:23 crc kubenswrapper[5024]: I1007 12:46:23.553464 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d8dff8cf4-cpjpz" event={"ID":"ea15930e-0c3f-421a-a9b8-a82399fa4e93","Type":"ContainerStarted","Data":"c1f5669971a3990bac2727bb8b6f703fd4c25554a169b9923be7cc828059dc5a"} Oct 07 12:46:24 crc kubenswrapper[5024]: I1007 12:46:24.564170 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d8dff8cf4-cpjpz" event={"ID":"ea15930e-0c3f-421a-a9b8-a82399fa4e93","Type":"ContainerStarted","Data":"ff2b9f7d62671cbecdee9ee890b931b607d4133fc58f7932f758c5eeb2face73"} Oct 07 12:46:24 crc kubenswrapper[5024]: I1007 12:46:24.564334 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-d8dff8cf4-cpjpz" Oct 07 12:46:24 crc kubenswrapper[5024]: I1007 12:46:24.586508 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-d8dff8cf4-cpjpz" podStartSLOduration=3.586487964 podStartE2EDuration="3.586487964s" podCreationTimestamp="2025-10-07 12:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-10-07 12:46:24.583613119 +0000 UTC m=+1122.659399957" watchObservedRunningTime="2025-10-07 12:46:24.586487964 +0000 UTC m=+1122.662274792" Oct 07 12:46:29 crc kubenswrapper[5024]: I1007 12:46:29.163325 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt" Oct 07 12:46:29 crc kubenswrapper[5024]: I1007 12:46:29.231799 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-szxq8"] Oct 07 12:46:29 crc kubenswrapper[5024]: I1007 12:46:29.232403 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" podUID="347553c3-a0ad-42c8-9923-5b17ba77a7a6" containerName="dnsmasq-dns" containerID="cri-o://80ed6b1ba1f0ec6c4b86244276aeb558d15816b3bfc58b2774d8a5710719b4b3" gracePeriod=10 Oct 07 12:46:29 crc kubenswrapper[5024]: I1007 12:46:29.614182 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" event={"ID":"347553c3-a0ad-42c8-9923-5b17ba77a7a6","Type":"ContainerDied","Data":"80ed6b1ba1f0ec6c4b86244276aeb558d15816b3bfc58b2774d8a5710719b4b3"} Oct 07 12:46:29 crc kubenswrapper[5024]: I1007 12:46:29.614125 5024 generic.go:334] "Generic (PLEG): container finished" podID="347553c3-a0ad-42c8-9923-5b17ba77a7a6" containerID="80ed6b1ba1f0ec6c4b86244276aeb558d15816b3bfc58b2774d8a5710719b4b3" exitCode=0 Oct 07 12:46:30 crc kubenswrapper[5024]: I1007 12:46:30.577511 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" podUID="347553c3-a0ad-42c8-9923-5b17ba77a7a6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Oct 07 12:46:38 crc kubenswrapper[5024]: I1007 12:46:38.699773 5024 generic.go:334] "Generic (PLEG): container finished" podID="b1a60ce0-997e-4a92-9ed2-8326cd95d4a3" 
containerID="db97de51cefbb6f8fec0daf7081808cce4d7c30ab6bf39d2459db02b7c29c4cd" exitCode=0 Oct 07 12:46:38 crc kubenswrapper[5024]: I1007 12:46:38.699842 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gtjxh" event={"ID":"b1a60ce0-997e-4a92-9ed2-8326cd95d4a3","Type":"ContainerDied","Data":"db97de51cefbb6f8fec0daf7081808cce4d7c30ab6bf39d2459db02b7c29c4cd"} Oct 07 12:46:39 crc kubenswrapper[5024]: I1007 12:46:39.978990 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.052579 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52xbc\" (UniqueName: \"kubernetes.io/projected/347553c3-a0ad-42c8-9923-5b17ba77a7a6-kube-api-access-52xbc\") pod \"347553c3-a0ad-42c8-9923-5b17ba77a7a6\" (UID: \"347553c3-a0ad-42c8-9923-5b17ba77a7a6\") " Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.052642 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/347553c3-a0ad-42c8-9923-5b17ba77a7a6-ovsdbserver-nb\") pod \"347553c3-a0ad-42c8-9923-5b17ba77a7a6\" (UID: \"347553c3-a0ad-42c8-9923-5b17ba77a7a6\") " Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.052703 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/347553c3-a0ad-42c8-9923-5b17ba77a7a6-ovsdbserver-sb\") pod \"347553c3-a0ad-42c8-9923-5b17ba77a7a6\" (UID: \"347553c3-a0ad-42c8-9923-5b17ba77a7a6\") " Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.052804 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/347553c3-a0ad-42c8-9923-5b17ba77a7a6-dns-svc\") pod \"347553c3-a0ad-42c8-9923-5b17ba77a7a6\" (UID: \"347553c3-a0ad-42c8-9923-5b17ba77a7a6\") 
" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.052844 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/347553c3-a0ad-42c8-9923-5b17ba77a7a6-config\") pod \"347553c3-a0ad-42c8-9923-5b17ba77a7a6\" (UID: \"347553c3-a0ad-42c8-9923-5b17ba77a7a6\") " Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.059010 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/347553c3-a0ad-42c8-9923-5b17ba77a7a6-kube-api-access-52xbc" (OuterVolumeSpecName: "kube-api-access-52xbc") pod "347553c3-a0ad-42c8-9923-5b17ba77a7a6" (UID: "347553c3-a0ad-42c8-9923-5b17ba77a7a6"). InnerVolumeSpecName "kube-api-access-52xbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.096316 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/347553c3-a0ad-42c8-9923-5b17ba77a7a6-config" (OuterVolumeSpecName: "config") pod "347553c3-a0ad-42c8-9923-5b17ba77a7a6" (UID: "347553c3-a0ad-42c8-9923-5b17ba77a7a6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.099736 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/347553c3-a0ad-42c8-9923-5b17ba77a7a6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "347553c3-a0ad-42c8-9923-5b17ba77a7a6" (UID: "347553c3-a0ad-42c8-9923-5b17ba77a7a6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.103145 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/347553c3-a0ad-42c8-9923-5b17ba77a7a6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "347553c3-a0ad-42c8-9923-5b17ba77a7a6" (UID: "347553c3-a0ad-42c8-9923-5b17ba77a7a6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.114094 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/347553c3-a0ad-42c8-9923-5b17ba77a7a6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "347553c3-a0ad-42c8-9923-5b17ba77a7a6" (UID: "347553c3-a0ad-42c8-9923-5b17ba77a7a6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.154633 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52xbc\" (UniqueName: \"kubernetes.io/projected/347553c3-a0ad-42c8-9923-5b17ba77a7a6-kube-api-access-52xbc\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.154674 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/347553c3-a0ad-42c8-9923-5b17ba77a7a6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.154684 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/347553c3-a0ad-42c8-9923-5b17ba77a7a6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.154692 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/347553c3-a0ad-42c8-9923-5b17ba77a7a6-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 07 
12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.154703 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/347553c3-a0ad-42c8-9923-5b17ba77a7a6-config\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.422069 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gtjxh" Oct 07 12:46:40 crc kubenswrapper[5024]: E1007 12:46:40.443006 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Oct 07 12:46:40 crc kubenswrapper[5024]: E1007 12:46:40.443227 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeM
ount{Name:kube-api-access-gg7jg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(230742cc-7316-4b6b-8331-5a0352b4ebcb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 07 12:46:40 crc kubenswrapper[5024]: E1007 12:46:40.444465 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="230742cc-7316-4b6b-8331-5a0352b4ebcb" Oct 07 12:46:40 crc 
kubenswrapper[5024]: I1007 12:46:40.563437 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a60ce0-997e-4a92-9ed2-8326cd95d4a3-combined-ca-bundle\") pod \"b1a60ce0-997e-4a92-9ed2-8326cd95d4a3\" (UID: \"b1a60ce0-997e-4a92-9ed2-8326cd95d4a3\") " Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.563879 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b1a60ce0-997e-4a92-9ed2-8326cd95d4a3-config\") pod \"b1a60ce0-997e-4a92-9ed2-8326cd95d4a3\" (UID: \"b1a60ce0-997e-4a92-9ed2-8326cd95d4a3\") " Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.563929 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6twdz\" (UniqueName: \"kubernetes.io/projected/b1a60ce0-997e-4a92-9ed2-8326cd95d4a3-kube-api-access-6twdz\") pod \"b1a60ce0-997e-4a92-9ed2-8326cd95d4a3\" (UID: \"b1a60ce0-997e-4a92-9ed2-8326cd95d4a3\") " Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.567893 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1a60ce0-997e-4a92-9ed2-8326cd95d4a3-kube-api-access-6twdz" (OuterVolumeSpecName: "kube-api-access-6twdz") pod "b1a60ce0-997e-4a92-9ed2-8326cd95d4a3" (UID: "b1a60ce0-997e-4a92-9ed2-8326cd95d4a3"). InnerVolumeSpecName "kube-api-access-6twdz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.577944 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" podUID="347553c3-a0ad-42c8-9923-5b17ba77a7a6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: i/o timeout" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.585066 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1a60ce0-997e-4a92-9ed2-8326cd95d4a3-config" (OuterVolumeSpecName: "config") pod "b1a60ce0-997e-4a92-9ed2-8326cd95d4a3" (UID: "b1a60ce0-997e-4a92-9ed2-8326cd95d4a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.588644 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1a60ce0-997e-4a92-9ed2-8326cd95d4a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1a60ce0-997e-4a92-9ed2-8326cd95d4a3" (UID: "b1a60ce0-997e-4a92-9ed2-8326cd95d4a3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.665175 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a60ce0-997e-4a92-9ed2-8326cd95d4a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.665213 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b1a60ce0-997e-4a92-9ed2-8326cd95d4a3-config\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.665224 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6twdz\" (UniqueName: \"kubernetes.io/projected/b1a60ce0-997e-4a92-9ed2-8326cd95d4a3-kube-api-access-6twdz\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.722412 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gtjxh" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.722756 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gtjxh" event={"ID":"b1a60ce0-997e-4a92-9ed2-8326cd95d4a3","Type":"ContainerDied","Data":"2087fe11ac8c9e9bcb26f3439e65d0e5bfaac2d0212d14df334cebb333cc9ba9"} Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.722802 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2087fe11ac8c9e9bcb26f3439e65d0e5bfaac2d0212d14df334cebb333cc9ba9" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.726617 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" event={"ID":"347553c3-a0ad-42c8-9923-5b17ba77a7a6","Type":"ContainerDied","Data":"01f3c6e0c86d1abc957a3533d282882a55469b38177e629d80cb766e1f79f9b7"} Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.726693 5024 scope.go:117] "RemoveContainer" 
containerID="80ed6b1ba1f0ec6c4b86244276aeb558d15816b3bfc58b2774d8a5710719b4b3" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.726628 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="230742cc-7316-4b6b-8331-5a0352b4ebcb" containerName="ceilometer-central-agent" containerID="cri-o://4d629c26264afef2da5485b6a6d7f90d550a9753f7cc3b15e7aab1062b894645" gracePeriod=30 Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.726901 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-szxq8" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.726965 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="230742cc-7316-4b6b-8331-5a0352b4ebcb" containerName="sg-core" containerID="cri-o://7dac2a5f98632aa1844df6767f3c2455228d27a805b403677895ffd5efd2016a" gracePeriod=30 Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.726995 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="230742cc-7316-4b6b-8331-5a0352b4ebcb" containerName="ceilometer-notification-agent" containerID="cri-o://00809a40595e8eef31fa04fda5c6ce879a689e58ede122be8acd1ddafc6ee86f" gracePeriod=30 Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.791558 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-szxq8"] Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.809546 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-szxq8"] Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.923740 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-qxkfc"] Oct 07 12:46:40 crc kubenswrapper[5024]: E1007 12:46:40.924652 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347553c3-a0ad-42c8-9923-5b17ba77a7a6" 
containerName="init" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.924671 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="347553c3-a0ad-42c8-9923-5b17ba77a7a6" containerName="init" Oct 07 12:46:40 crc kubenswrapper[5024]: E1007 12:46:40.924689 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347553c3-a0ad-42c8-9923-5b17ba77a7a6" containerName="dnsmasq-dns" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.924696 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="347553c3-a0ad-42c8-9923-5b17ba77a7a6" containerName="dnsmasq-dns" Oct 07 12:46:40 crc kubenswrapper[5024]: E1007 12:46:40.924730 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1a60ce0-997e-4a92-9ed2-8326cd95d4a3" containerName="neutron-db-sync" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.924737 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1a60ce0-997e-4a92-9ed2-8326cd95d4a3" containerName="neutron-db-sync" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.925035 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1a60ce0-997e-4a92-9ed2-8326cd95d4a3" containerName="neutron-db-sync" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.925071 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="347553c3-a0ad-42c8-9923-5b17ba77a7a6" containerName="dnsmasq-dns" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.929912 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-qxkfc" Oct 07 12:46:40 crc kubenswrapper[5024]: I1007 12:46:40.961191 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-qxkfc"] Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.002648 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-56d49fff64-vcdq6"] Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.004630 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56d49fff64-vcdq6" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.013117 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.013228 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.013393 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-jkdr9" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.014714 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.037699 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-56d49fff64-vcdq6"] Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.087067 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed506823-6979-4b77-8a6f-818efe132f8a-config\") pod \"dnsmasq-dns-7b946d459c-qxkfc\" (UID: \"ed506823-6979-4b77-8a6f-818efe132f8a\") " pod="openstack/dnsmasq-dns-7b946d459c-qxkfc" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.087466 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/ed506823-6979-4b77-8a6f-818efe132f8a-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-qxkfc\" (UID: \"ed506823-6979-4b77-8a6f-818efe132f8a\") " pod="openstack/dnsmasq-dns-7b946d459c-qxkfc" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.087635 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4682k\" (UniqueName: \"kubernetes.io/projected/ed506823-6979-4b77-8a6f-818efe132f8a-kube-api-access-4682k\") pod \"dnsmasq-dns-7b946d459c-qxkfc\" (UID: \"ed506823-6979-4b77-8a6f-818efe132f8a\") " pod="openstack/dnsmasq-dns-7b946d459c-qxkfc" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.087828 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed506823-6979-4b77-8a6f-818efe132f8a-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-qxkfc\" (UID: \"ed506823-6979-4b77-8a6f-818efe132f8a\") " pod="openstack/dnsmasq-dns-7b946d459c-qxkfc" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.087998 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed506823-6979-4b77-8a6f-818efe132f8a-dns-svc\") pod \"dnsmasq-dns-7b946d459c-qxkfc\" (UID: \"ed506823-6979-4b77-8a6f-818efe132f8a\") " pod="openstack/dnsmasq-dns-7b946d459c-qxkfc" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.192130 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-combined-ca-bundle\") pod \"neutron-56d49fff64-vcdq6\" (UID: \"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3\") " pod="openstack/neutron-56d49fff64-vcdq6" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.192387 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-n8j6n\" (UniqueName: \"kubernetes.io/projected/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-kube-api-access-n8j6n\") pod \"neutron-56d49fff64-vcdq6\" (UID: \"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3\") " pod="openstack/neutron-56d49fff64-vcdq6" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.192782 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed506823-6979-4b77-8a6f-818efe132f8a-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-qxkfc\" (UID: \"ed506823-6979-4b77-8a6f-818efe132f8a\") " pod="openstack/dnsmasq-dns-7b946d459c-qxkfc" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.192883 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-config\") pod \"neutron-56d49fff64-vcdq6\" (UID: \"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3\") " pod="openstack/neutron-56d49fff64-vcdq6" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.192971 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4682k\" (UniqueName: \"kubernetes.io/projected/ed506823-6979-4b77-8a6f-818efe132f8a-kube-api-access-4682k\") pod \"dnsmasq-dns-7b946d459c-qxkfc\" (UID: \"ed506823-6979-4b77-8a6f-818efe132f8a\") " pod="openstack/dnsmasq-dns-7b946d459c-qxkfc" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.193045 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-httpd-config\") pod \"neutron-56d49fff64-vcdq6\" (UID: \"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3\") " pod="openstack/neutron-56d49fff64-vcdq6" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.193149 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-ovndb-tls-certs\") pod \"neutron-56d49fff64-vcdq6\" (UID: \"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3\") " pod="openstack/neutron-56d49fff64-vcdq6" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.193302 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed506823-6979-4b77-8a6f-818efe132f8a-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-qxkfc\" (UID: \"ed506823-6979-4b77-8a6f-818efe132f8a\") " pod="openstack/dnsmasq-dns-7b946d459c-qxkfc" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.194133 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed506823-6979-4b77-8a6f-818efe132f8a-dns-svc\") pod \"dnsmasq-dns-7b946d459c-qxkfc\" (UID: \"ed506823-6979-4b77-8a6f-818efe132f8a\") " pod="openstack/dnsmasq-dns-7b946d459c-qxkfc" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.194759 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed506823-6979-4b77-8a6f-818efe132f8a-config\") pod \"dnsmasq-dns-7b946d459c-qxkfc\" (UID: \"ed506823-6979-4b77-8a6f-818efe132f8a\") " pod="openstack/dnsmasq-dns-7b946d459c-qxkfc" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.193983 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed506823-6979-4b77-8a6f-818efe132f8a-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-qxkfc\" (UID: \"ed506823-6979-4b77-8a6f-818efe132f8a\") " pod="openstack/dnsmasq-dns-7b946d459c-qxkfc" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.194705 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed506823-6979-4b77-8a6f-818efe132f8a-dns-svc\") pod 
\"dnsmasq-dns-7b946d459c-qxkfc\" (UID: \"ed506823-6979-4b77-8a6f-818efe132f8a\") " pod="openstack/dnsmasq-dns-7b946d459c-qxkfc" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.194060 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed506823-6979-4b77-8a6f-818efe132f8a-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-qxkfc\" (UID: \"ed506823-6979-4b77-8a6f-818efe132f8a\") " pod="openstack/dnsmasq-dns-7b946d459c-qxkfc" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.195359 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed506823-6979-4b77-8a6f-818efe132f8a-config\") pod \"dnsmasq-dns-7b946d459c-qxkfc\" (UID: \"ed506823-6979-4b77-8a6f-818efe132f8a\") " pod="openstack/dnsmasq-dns-7b946d459c-qxkfc" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.217013 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4682k\" (UniqueName: \"kubernetes.io/projected/ed506823-6979-4b77-8a6f-818efe132f8a-kube-api-access-4682k\") pod \"dnsmasq-dns-7b946d459c-qxkfc\" (UID: \"ed506823-6979-4b77-8a6f-818efe132f8a\") " pod="openstack/dnsmasq-dns-7b946d459c-qxkfc" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.296585 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-config\") pod \"neutron-56d49fff64-vcdq6\" (UID: \"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3\") " pod="openstack/neutron-56d49fff64-vcdq6" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.296644 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-httpd-config\") pod \"neutron-56d49fff64-vcdq6\" (UID: \"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3\") " 
pod="openstack/neutron-56d49fff64-vcdq6" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.296678 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-ovndb-tls-certs\") pod \"neutron-56d49fff64-vcdq6\" (UID: \"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3\") " pod="openstack/neutron-56d49fff64-vcdq6" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.296855 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-combined-ca-bundle\") pod \"neutron-56d49fff64-vcdq6\" (UID: \"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3\") " pod="openstack/neutron-56d49fff64-vcdq6" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.296883 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8j6n\" (UniqueName: \"kubernetes.io/projected/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-kube-api-access-n8j6n\") pod \"neutron-56d49fff64-vcdq6\" (UID: \"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3\") " pod="openstack/neutron-56d49fff64-vcdq6" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.298792 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-qxkfc" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.303507 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-combined-ca-bundle\") pod \"neutron-56d49fff64-vcdq6\" (UID: \"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3\") " pod="openstack/neutron-56d49fff64-vcdq6" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.303739 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-config\") pod \"neutron-56d49fff64-vcdq6\" (UID: \"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3\") " pod="openstack/neutron-56d49fff64-vcdq6" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.304618 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-httpd-config\") pod \"neutron-56d49fff64-vcdq6\" (UID: \"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3\") " pod="openstack/neutron-56d49fff64-vcdq6" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.304794 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-ovndb-tls-certs\") pod \"neutron-56d49fff64-vcdq6\" (UID: \"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3\") " pod="openstack/neutron-56d49fff64-vcdq6" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.316249 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8j6n\" (UniqueName: \"kubernetes.io/projected/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-kube-api-access-n8j6n\") pod \"neutron-56d49fff64-vcdq6\" (UID: \"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3\") " pod="openstack/neutron-56d49fff64-vcdq6" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.326258 5024 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56d49fff64-vcdq6" Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.765073 5024 generic.go:334] "Generic (PLEG): container finished" podID="230742cc-7316-4b6b-8331-5a0352b4ebcb" containerID="7dac2a5f98632aa1844df6767f3c2455228d27a805b403677895ffd5efd2016a" exitCode=2 Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.765552 5024 generic.go:334] "Generic (PLEG): container finished" podID="230742cc-7316-4b6b-8331-5a0352b4ebcb" containerID="4d629c26264afef2da5485b6a6d7f90d550a9753f7cc3b15e7aab1062b894645" exitCode=0 Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.765208 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"230742cc-7316-4b6b-8331-5a0352b4ebcb","Type":"ContainerDied","Data":"7dac2a5f98632aa1844df6767f3c2455228d27a805b403677895ffd5efd2016a"} Oct 07 12:46:41 crc kubenswrapper[5024]: I1007 12:46:41.765607 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"230742cc-7316-4b6b-8331-5a0352b4ebcb","Type":"ContainerDied","Data":"4d629c26264afef2da5485b6a6d7f90d550a9753f7cc3b15e7aab1062b894645"} Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.130214 5024 scope.go:117] "RemoveContainer" containerID="9617661150e5c0f6db9c6baed91914dbe52391c5b1fa80c0da2538fe2c9e169f" Oct 07 12:46:42 crc kubenswrapper[5024]: E1007 12:46:42.174455 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Oct 07 12:46:42 crc kubenswrapper[5024]: E1007 12:46:42.174671 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-55b7v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-pzbf6_openstack(2ac3ad2e-791b-4133-8417-61c5465da6ea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 07 12:46:42 crc kubenswrapper[5024]: E1007 12:46:42.175835 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-pzbf6" podUID="2ac3ad2e-791b-4133-8417-61c5465da6ea" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.499326 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.619513 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/230742cc-7316-4b6b-8331-5a0352b4ebcb-sg-core-conf-yaml\") pod \"230742cc-7316-4b6b-8331-5a0352b4ebcb\" (UID: \"230742cc-7316-4b6b-8331-5a0352b4ebcb\") " Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.619898 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/230742cc-7316-4b6b-8331-5a0352b4ebcb-config-data\") pod \"230742cc-7316-4b6b-8331-5a0352b4ebcb\" (UID: \"230742cc-7316-4b6b-8331-5a0352b4ebcb\") " Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.619937 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/230742cc-7316-4b6b-8331-5a0352b4ebcb-run-httpd\") pod \"230742cc-7316-4b6b-8331-5a0352b4ebcb\" (UID: \"230742cc-7316-4b6b-8331-5a0352b4ebcb\") " Oct 07 12:46:42 crc 
kubenswrapper[5024]: I1007 12:46:42.620010 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/230742cc-7316-4b6b-8331-5a0352b4ebcb-combined-ca-bundle\") pod \"230742cc-7316-4b6b-8331-5a0352b4ebcb\" (UID: \"230742cc-7316-4b6b-8331-5a0352b4ebcb\") " Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.620102 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg7jg\" (UniqueName: \"kubernetes.io/projected/230742cc-7316-4b6b-8331-5a0352b4ebcb-kube-api-access-gg7jg\") pod \"230742cc-7316-4b6b-8331-5a0352b4ebcb\" (UID: \"230742cc-7316-4b6b-8331-5a0352b4ebcb\") " Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.620195 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/230742cc-7316-4b6b-8331-5a0352b4ebcb-log-httpd\") pod \"230742cc-7316-4b6b-8331-5a0352b4ebcb\" (UID: \"230742cc-7316-4b6b-8331-5a0352b4ebcb\") " Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.620235 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/230742cc-7316-4b6b-8331-5a0352b4ebcb-scripts\") pod \"230742cc-7316-4b6b-8331-5a0352b4ebcb\" (UID: \"230742cc-7316-4b6b-8331-5a0352b4ebcb\") " Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.621243 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/230742cc-7316-4b6b-8331-5a0352b4ebcb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "230742cc-7316-4b6b-8331-5a0352b4ebcb" (UID: "230742cc-7316-4b6b-8331-5a0352b4ebcb"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.622412 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/230742cc-7316-4b6b-8331-5a0352b4ebcb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "230742cc-7316-4b6b-8331-5a0352b4ebcb" (UID: "230742cc-7316-4b6b-8331-5a0352b4ebcb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.625705 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/230742cc-7316-4b6b-8331-5a0352b4ebcb-scripts" (OuterVolumeSpecName: "scripts") pod "230742cc-7316-4b6b-8331-5a0352b4ebcb" (UID: "230742cc-7316-4b6b-8331-5a0352b4ebcb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.626422 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/230742cc-7316-4b6b-8331-5a0352b4ebcb-kube-api-access-gg7jg" (OuterVolumeSpecName: "kube-api-access-gg7jg") pod "230742cc-7316-4b6b-8331-5a0352b4ebcb" (UID: "230742cc-7316-4b6b-8331-5a0352b4ebcb"). InnerVolumeSpecName "kube-api-access-gg7jg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.659477 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/230742cc-7316-4b6b-8331-5a0352b4ebcb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "230742cc-7316-4b6b-8331-5a0352b4ebcb" (UID: "230742cc-7316-4b6b-8331-5a0352b4ebcb"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.667347 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/230742cc-7316-4b6b-8331-5a0352b4ebcb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "230742cc-7316-4b6b-8331-5a0352b4ebcb" (UID: "230742cc-7316-4b6b-8331-5a0352b4ebcb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.688232 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/230742cc-7316-4b6b-8331-5a0352b4ebcb-config-data" (OuterVolumeSpecName: "config-data") pod "230742cc-7316-4b6b-8331-5a0352b4ebcb" (UID: "230742cc-7316-4b6b-8331-5a0352b4ebcb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.729778 5024 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/230742cc-7316-4b6b-8331-5a0352b4ebcb-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.729841 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/230742cc-7316-4b6b-8331-5a0352b4ebcb-scripts\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.729853 5024 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/230742cc-7316-4b6b-8331-5a0352b4ebcb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.729889 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/230742cc-7316-4b6b-8331-5a0352b4ebcb-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:42 crc 
kubenswrapper[5024]: I1007 12:46:42.729976 5024 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/230742cc-7316-4b6b-8331-5a0352b4ebcb-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.729988 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/230742cc-7316-4b6b-8331-5a0352b4ebcb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.730001 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg7jg\" (UniqueName: \"kubernetes.io/projected/230742cc-7316-4b6b-8331-5a0352b4ebcb-kube-api-access-gg7jg\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.767636 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="347553c3-a0ad-42c8-9923-5b17ba77a7a6" path="/var/lib/kubelet/pods/347553c3-a0ad-42c8-9923-5b17ba77a7a6/volumes" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.777086 5024 generic.go:334] "Generic (PLEG): container finished" podID="230742cc-7316-4b6b-8331-5a0352b4ebcb" containerID="00809a40595e8eef31fa04fda5c6ce879a689e58ede122be8acd1ddafc6ee86f" exitCode=0 Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.777153 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"230742cc-7316-4b6b-8331-5a0352b4ebcb","Type":"ContainerDied","Data":"00809a40595e8eef31fa04fda5c6ce879a689e58ede122be8acd1ddafc6ee86f"} Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.777203 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"230742cc-7316-4b6b-8331-5a0352b4ebcb","Type":"ContainerDied","Data":"69bf01fbba5cf3b5d2b81f1cae5d883e2b4f84f360add6190b3a4c159f08d8c4"} Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.777223 5024 scope.go:117] "RemoveContainer" 
containerID="7dac2a5f98632aa1844df6767f3c2455228d27a805b403677895ffd5efd2016a" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.777339 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.787274 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-s85hg" event={"ID":"a2f43bf9-8914-4def-a454-a4e5bd3d843b","Type":"ContainerStarted","Data":"e595cd218e3b423dca4725103664a5c480a4ec6c08a3d61b0fcc126296575362"} Oct 07 12:46:42 crc kubenswrapper[5024]: E1007 12:46:42.802789 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-pzbf6" podUID="2ac3ad2e-791b-4133-8417-61c5465da6ea" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.812875 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-s85hg" podStartSLOduration=5.429589453 podStartE2EDuration="26.812856449s" podCreationTimestamp="2025-10-07 12:46:16 +0000 UTC" firstStartedPulling="2025-10-07 12:46:20.762951374 +0000 UTC m=+1118.838738222" lastFinishedPulling="2025-10-07 12:46:42.14621838 +0000 UTC m=+1140.222005218" observedRunningTime="2025-10-07 12:46:42.811321574 +0000 UTC m=+1140.887108412" watchObservedRunningTime="2025-10-07 12:46:42.812856449 +0000 UTC m=+1140.888643287" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.813516 5024 scope.go:117] "RemoveContainer" containerID="00809a40595e8eef31fa04fda5c6ce879a689e58ede122be8acd1ddafc6ee86f" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.841915 5024 scope.go:117] "RemoveContainer" containerID="4d629c26264afef2da5485b6a6d7f90d550a9753f7cc3b15e7aab1062b894645" Oct 07 12:46:42 crc kubenswrapper[5024]: W1007 12:46:42.859282 5024 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded506823_6979_4b77_8a6f_818efe132f8a.slice/crio-5fb19d9cb2c7cf54ac28906c93213cd34796d074427f5f713abb83c4d3f52953 WatchSource:0}: Error finding container 5fb19d9cb2c7cf54ac28906c93213cd34796d074427f5f713abb83c4d3f52953: Status 404 returned error can't find the container with id 5fb19d9cb2c7cf54ac28906c93213cd34796d074427f5f713abb83c4d3f52953 Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.873232 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-qxkfc"] Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.881317 5024 scope.go:117] "RemoveContainer" containerID="7dac2a5f98632aa1844df6767f3c2455228d27a805b403677895ffd5efd2016a" Oct 07 12:46:42 crc kubenswrapper[5024]: E1007 12:46:42.882532 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dac2a5f98632aa1844df6767f3c2455228d27a805b403677895ffd5efd2016a\": container with ID starting with 7dac2a5f98632aa1844df6767f3c2455228d27a805b403677895ffd5efd2016a not found: ID does not exist" containerID="7dac2a5f98632aa1844df6767f3c2455228d27a805b403677895ffd5efd2016a" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.882645 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dac2a5f98632aa1844df6767f3c2455228d27a805b403677895ffd5efd2016a"} err="failed to get container status \"7dac2a5f98632aa1844df6767f3c2455228d27a805b403677895ffd5efd2016a\": rpc error: code = NotFound desc = could not find container \"7dac2a5f98632aa1844df6767f3c2455228d27a805b403677895ffd5efd2016a\": container with ID starting with 7dac2a5f98632aa1844df6767f3c2455228d27a805b403677895ffd5efd2016a not found: ID does not exist" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.882724 5024 scope.go:117] "RemoveContainer" 
containerID="00809a40595e8eef31fa04fda5c6ce879a689e58ede122be8acd1ddafc6ee86f" Oct 07 12:46:42 crc kubenswrapper[5024]: E1007 12:46:42.883253 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00809a40595e8eef31fa04fda5c6ce879a689e58ede122be8acd1ddafc6ee86f\": container with ID starting with 00809a40595e8eef31fa04fda5c6ce879a689e58ede122be8acd1ddafc6ee86f not found: ID does not exist" containerID="00809a40595e8eef31fa04fda5c6ce879a689e58ede122be8acd1ddafc6ee86f" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.883334 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00809a40595e8eef31fa04fda5c6ce879a689e58ede122be8acd1ddafc6ee86f"} err="failed to get container status \"00809a40595e8eef31fa04fda5c6ce879a689e58ede122be8acd1ddafc6ee86f\": rpc error: code = NotFound desc = could not find container \"00809a40595e8eef31fa04fda5c6ce879a689e58ede122be8acd1ddafc6ee86f\": container with ID starting with 00809a40595e8eef31fa04fda5c6ce879a689e58ede122be8acd1ddafc6ee86f not found: ID does not exist" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.883404 5024 scope.go:117] "RemoveContainer" containerID="4d629c26264afef2da5485b6a6d7f90d550a9753f7cc3b15e7aab1062b894645" Oct 07 12:46:42 crc kubenswrapper[5024]: E1007 12:46:42.884546 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d629c26264afef2da5485b6a6d7f90d550a9753f7cc3b15e7aab1062b894645\": container with ID starting with 4d629c26264afef2da5485b6a6d7f90d550a9753f7cc3b15e7aab1062b894645 not found: ID does not exist" containerID="4d629c26264afef2da5485b6a6d7f90d550a9753f7cc3b15e7aab1062b894645" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.884709 5024 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4d629c26264afef2da5485b6a6d7f90d550a9753f7cc3b15e7aab1062b894645"} err="failed to get container status \"4d629c26264afef2da5485b6a6d7f90d550a9753f7cc3b15e7aab1062b894645\": rpc error: code = NotFound desc = could not find container \"4d629c26264afef2da5485b6a6d7f90d550a9753f7cc3b15e7aab1062b894645\": container with ID starting with 4d629c26264afef2da5485b6a6d7f90d550a9753f7cc3b15e7aab1062b894645 not found: ID does not exist" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.909964 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.918407 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.935760 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:46:42 crc kubenswrapper[5024]: E1007 12:46:42.936400 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="230742cc-7316-4b6b-8331-5a0352b4ebcb" containerName="ceilometer-notification-agent" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.936481 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="230742cc-7316-4b6b-8331-5a0352b4ebcb" containerName="ceilometer-notification-agent" Oct 07 12:46:42 crc kubenswrapper[5024]: E1007 12:46:42.936556 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="230742cc-7316-4b6b-8331-5a0352b4ebcb" containerName="ceilometer-central-agent" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.936622 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="230742cc-7316-4b6b-8331-5a0352b4ebcb" containerName="ceilometer-central-agent" Oct 07 12:46:42 crc kubenswrapper[5024]: E1007 12:46:42.936672 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="230742cc-7316-4b6b-8331-5a0352b4ebcb" containerName="sg-core" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 
12:46:42.936718 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="230742cc-7316-4b6b-8331-5a0352b4ebcb" containerName="sg-core" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.936937 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="230742cc-7316-4b6b-8331-5a0352b4ebcb" containerName="ceilometer-notification-agent" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.937015 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="230742cc-7316-4b6b-8331-5a0352b4ebcb" containerName="sg-core" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.937079 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="230742cc-7316-4b6b-8331-5a0352b4ebcb" containerName="ceilometer-central-agent" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.938826 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.941817 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.942017 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 07 12:46:42 crc kubenswrapper[5024]: I1007 12:46:42.961423 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.033985 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-config-data\") pod \"ceilometer-0\" (UID: \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") " pod="openstack/ceilometer-0" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.034040 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") " pod="openstack/ceilometer-0" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.034072 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-run-httpd\") pod \"ceilometer-0\" (UID: \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") " pod="openstack/ceilometer-0" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.034092 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l5d6\" (UniqueName: \"kubernetes.io/projected/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-kube-api-access-4l5d6\") pod \"ceilometer-0\" (UID: \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") " pod="openstack/ceilometer-0" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.034163 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") " pod="openstack/ceilometer-0" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.034179 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-log-httpd\") pod \"ceilometer-0\" (UID: \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") " pod="openstack/ceilometer-0" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.034193 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-scripts\") pod \"ceilometer-0\" (UID: 
\"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") " pod="openstack/ceilometer-0" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.106356 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-56d49fff64-vcdq6"] Oct 07 12:46:43 crc kubenswrapper[5024]: W1007 12:46:43.110170 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79c7d81c_3cc1_4dff_9cbb_99cb1e121cb3.slice/crio-7a454853960f2013b6f2a07ef479f205ced26393950cc40ef10142b05db6639f WatchSource:0}: Error finding container 7a454853960f2013b6f2a07ef479f205ced26393950cc40ef10142b05db6639f: Status 404 returned error can't find the container with id 7a454853960f2013b6f2a07ef479f205ced26393950cc40ef10142b05db6639f Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.136344 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-run-httpd\") pod \"ceilometer-0\" (UID: \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") " pod="openstack/ceilometer-0" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.136407 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l5d6\" (UniqueName: \"kubernetes.io/projected/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-kube-api-access-4l5d6\") pod \"ceilometer-0\" (UID: \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") " pod="openstack/ceilometer-0" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.136497 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") " pod="openstack/ceilometer-0" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.136519 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-log-httpd\") pod \"ceilometer-0\" (UID: \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") " pod="openstack/ceilometer-0" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.136540 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-scripts\") pod \"ceilometer-0\" (UID: \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") " pod="openstack/ceilometer-0" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.136590 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-config-data\") pod \"ceilometer-0\" (UID: \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") " pod="openstack/ceilometer-0" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.136634 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") " pod="openstack/ceilometer-0" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.137001 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-run-httpd\") pod \"ceilometer-0\" (UID: \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") " pod="openstack/ceilometer-0" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.137102 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-log-httpd\") pod \"ceilometer-0\" (UID: \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") " pod="openstack/ceilometer-0" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.154082 5024 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") " pod="openstack/ceilometer-0" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.154314 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-scripts\") pod \"ceilometer-0\" (UID: \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") " pod="openstack/ceilometer-0" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.154817 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-config-data\") pod \"ceilometer-0\" (UID: \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") " pod="openstack/ceilometer-0" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.155286 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") " pod="openstack/ceilometer-0" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.159358 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l5d6\" (UniqueName: \"kubernetes.io/projected/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-kube-api-access-4l5d6\") pod \"ceilometer-0\" (UID: \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") " pod="openstack/ceilometer-0" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.226035 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6cf6bc68f7-lxxqd"] Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.227875 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6cf6bc68f7-lxxqd" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.235001 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.238120 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7hwl\" (UniqueName: \"kubernetes.io/projected/fdc1f1c0-bece-42f0-b499-bdf645a1a4c9-kube-api-access-b7hwl\") pod \"neutron-6cf6bc68f7-lxxqd\" (UID: \"fdc1f1c0-bece-42f0-b499-bdf645a1a4c9\") " pod="openstack/neutron-6cf6bc68f7-lxxqd" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.238188 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fdc1f1c0-bece-42f0-b499-bdf645a1a4c9-config\") pod \"neutron-6cf6bc68f7-lxxqd\" (UID: \"fdc1f1c0-bece-42f0-b499-bdf645a1a4c9\") " pod="openstack/neutron-6cf6bc68f7-lxxqd" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.238245 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fdc1f1c0-bece-42f0-b499-bdf645a1a4c9-httpd-config\") pod \"neutron-6cf6bc68f7-lxxqd\" (UID: \"fdc1f1c0-bece-42f0-b499-bdf645a1a4c9\") " pod="openstack/neutron-6cf6bc68f7-lxxqd" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.238291 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdc1f1c0-bece-42f0-b499-bdf645a1a4c9-ovndb-tls-certs\") pod \"neutron-6cf6bc68f7-lxxqd\" (UID: \"fdc1f1c0-bece-42f0-b499-bdf645a1a4c9\") " pod="openstack/neutron-6cf6bc68f7-lxxqd" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.238315 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdc1f1c0-bece-42f0-b499-bdf645a1a4c9-public-tls-certs\") pod \"neutron-6cf6bc68f7-lxxqd\" (UID: \"fdc1f1c0-bece-42f0-b499-bdf645a1a4c9\") " pod="openstack/neutron-6cf6bc68f7-lxxqd" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.238356 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc1f1c0-bece-42f0-b499-bdf645a1a4c9-combined-ca-bundle\") pod \"neutron-6cf6bc68f7-lxxqd\" (UID: \"fdc1f1c0-bece-42f0-b499-bdf645a1a4c9\") " pod="openstack/neutron-6cf6bc68f7-lxxqd" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.238393 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdc1f1c0-bece-42f0-b499-bdf645a1a4c9-internal-tls-certs\") pod \"neutron-6cf6bc68f7-lxxqd\" (UID: \"fdc1f1c0-bece-42f0-b499-bdf645a1a4c9\") " pod="openstack/neutron-6cf6bc68f7-lxxqd" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.242301 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6cf6bc68f7-lxxqd"] Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.243307 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.282854 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.338922 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fdc1f1c0-bece-42f0-b499-bdf645a1a4c9-config\") pod \"neutron-6cf6bc68f7-lxxqd\" (UID: \"fdc1f1c0-bece-42f0-b499-bdf645a1a4c9\") " pod="openstack/neutron-6cf6bc68f7-lxxqd" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.339015 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fdc1f1c0-bece-42f0-b499-bdf645a1a4c9-httpd-config\") pod \"neutron-6cf6bc68f7-lxxqd\" (UID: \"fdc1f1c0-bece-42f0-b499-bdf645a1a4c9\") " pod="openstack/neutron-6cf6bc68f7-lxxqd" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.339074 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdc1f1c0-bece-42f0-b499-bdf645a1a4c9-ovndb-tls-certs\") pod \"neutron-6cf6bc68f7-lxxqd\" (UID: \"fdc1f1c0-bece-42f0-b499-bdf645a1a4c9\") " pod="openstack/neutron-6cf6bc68f7-lxxqd" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.339093 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdc1f1c0-bece-42f0-b499-bdf645a1a4c9-public-tls-certs\") pod \"neutron-6cf6bc68f7-lxxqd\" (UID: \"fdc1f1c0-bece-42f0-b499-bdf645a1a4c9\") " pod="openstack/neutron-6cf6bc68f7-lxxqd" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.339264 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc1f1c0-bece-42f0-b499-bdf645a1a4c9-combined-ca-bundle\") pod \"neutron-6cf6bc68f7-lxxqd\" (UID: \"fdc1f1c0-bece-42f0-b499-bdf645a1a4c9\") " pod="openstack/neutron-6cf6bc68f7-lxxqd" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 
12:46:43.339297 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdc1f1c0-bece-42f0-b499-bdf645a1a4c9-internal-tls-certs\") pod \"neutron-6cf6bc68f7-lxxqd\" (UID: \"fdc1f1c0-bece-42f0-b499-bdf645a1a4c9\") " pod="openstack/neutron-6cf6bc68f7-lxxqd" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.339345 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7hwl\" (UniqueName: \"kubernetes.io/projected/fdc1f1c0-bece-42f0-b499-bdf645a1a4c9-kube-api-access-b7hwl\") pod \"neutron-6cf6bc68f7-lxxqd\" (UID: \"fdc1f1c0-bece-42f0-b499-bdf645a1a4c9\") " pod="openstack/neutron-6cf6bc68f7-lxxqd" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.349529 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdc1f1c0-bece-42f0-b499-bdf645a1a4c9-internal-tls-certs\") pod \"neutron-6cf6bc68f7-lxxqd\" (UID: \"fdc1f1c0-bece-42f0-b499-bdf645a1a4c9\") " pod="openstack/neutron-6cf6bc68f7-lxxqd" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.356668 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fdc1f1c0-bece-42f0-b499-bdf645a1a4c9-config\") pod \"neutron-6cf6bc68f7-lxxqd\" (UID: \"fdc1f1c0-bece-42f0-b499-bdf645a1a4c9\") " pod="openstack/neutron-6cf6bc68f7-lxxqd" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.357607 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc1f1c0-bece-42f0-b499-bdf645a1a4c9-combined-ca-bundle\") pod \"neutron-6cf6bc68f7-lxxqd\" (UID: \"fdc1f1c0-bece-42f0-b499-bdf645a1a4c9\") " pod="openstack/neutron-6cf6bc68f7-lxxqd" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.359478 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/fdc1f1c0-bece-42f0-b499-bdf645a1a4c9-ovndb-tls-certs\") pod \"neutron-6cf6bc68f7-lxxqd\" (UID: \"fdc1f1c0-bece-42f0-b499-bdf645a1a4c9\") " pod="openstack/neutron-6cf6bc68f7-lxxqd" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.363883 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdc1f1c0-bece-42f0-b499-bdf645a1a4c9-public-tls-certs\") pod \"neutron-6cf6bc68f7-lxxqd\" (UID: \"fdc1f1c0-bece-42f0-b499-bdf645a1a4c9\") " pod="openstack/neutron-6cf6bc68f7-lxxqd" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.371405 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fdc1f1c0-bece-42f0-b499-bdf645a1a4c9-httpd-config\") pod \"neutron-6cf6bc68f7-lxxqd\" (UID: \"fdc1f1c0-bece-42f0-b499-bdf645a1a4c9\") " pod="openstack/neutron-6cf6bc68f7-lxxqd" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.378516 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7hwl\" (UniqueName: \"kubernetes.io/projected/fdc1f1c0-bece-42f0-b499-bdf645a1a4c9-kube-api-access-b7hwl\") pod \"neutron-6cf6bc68f7-lxxqd\" (UID: \"fdc1f1c0-bece-42f0-b499-bdf645a1a4c9\") " pod="openstack/neutron-6cf6bc68f7-lxxqd" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.410495 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6cf6bc68f7-lxxqd" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.708836 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:46:43 crc kubenswrapper[5024]: W1007 12:46:43.714600 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5664fb1_cd66_429c_a2b4_3d5d94e966f6.slice/crio-1c9bf4d829bc93b80aca2ab58e0ba6c4a48dd7c97a595f20cb2194fb993e764a WatchSource:0}: Error finding container 1c9bf4d829bc93b80aca2ab58e0ba6c4a48dd7c97a595f20cb2194fb993e764a: Status 404 returned error can't find the container with id 1c9bf4d829bc93b80aca2ab58e0ba6c4a48dd7c97a595f20cb2194fb993e764a Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.807725 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5664fb1-cd66-429c-a2b4-3d5d94e966f6","Type":"ContainerStarted","Data":"1c9bf4d829bc93b80aca2ab58e0ba6c4a48dd7c97a595f20cb2194fb993e764a"} Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.809575 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56d49fff64-vcdq6" event={"ID":"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3","Type":"ContainerStarted","Data":"d5ac49ce16c99b4b2fcc087c071e08c98b088775affe0d55680769cc89849ab1"} Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.809601 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56d49fff64-vcdq6" event={"ID":"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3","Type":"ContainerStarted","Data":"8b52dd8f3bce230e3160d5b6531bf5449ee13d8c795a0481097282af26e99997"} Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.809612 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56d49fff64-vcdq6" event={"ID":"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3","Type":"ContainerStarted","Data":"7a454853960f2013b6f2a07ef479f205ced26393950cc40ef10142b05db6639f"} Oct 07 
12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.809804 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-56d49fff64-vcdq6" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.812545 5024 generic.go:334] "Generic (PLEG): container finished" podID="ed506823-6979-4b77-8a6f-818efe132f8a" containerID="8dfd22216b451728afb8cced42081679648139edd438aa3f469d1ab40268964e" exitCode=0 Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.812614 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-qxkfc" event={"ID":"ed506823-6979-4b77-8a6f-818efe132f8a","Type":"ContainerDied","Data":"8dfd22216b451728afb8cced42081679648139edd438aa3f469d1ab40268964e"} Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.812672 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-qxkfc" event={"ID":"ed506823-6979-4b77-8a6f-818efe132f8a","Type":"ContainerStarted","Data":"5fb19d9cb2c7cf54ac28906c93213cd34796d074427f5f713abb83c4d3f52953"} Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.828918 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-56d49fff64-vcdq6" podStartSLOduration=3.828886582 podStartE2EDuration="3.828886582s" podCreationTimestamp="2025-10-07 12:46:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:46:43.826870793 +0000 UTC m=+1141.902657681" watchObservedRunningTime="2025-10-07 12:46:43.828886582 +0000 UTC m=+1141.904673420" Oct 07 12:46:43 crc kubenswrapper[5024]: I1007 12:46:43.955441 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6cf6bc68f7-lxxqd"] Oct 07 12:46:43 crc kubenswrapper[5024]: W1007 12:46:43.963667 5024 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdc1f1c0_bece_42f0_b499_bdf645a1a4c9.slice/crio-52339638b123fb30dcae6a90cd5809d62e452f218a634d0433f89342375d6e90 WatchSource:0}: Error finding container 52339638b123fb30dcae6a90cd5809d62e452f218a634d0433f89342375d6e90: Status 404 returned error can't find the container with id 52339638b123fb30dcae6a90cd5809d62e452f218a634d0433f89342375d6e90 Oct 07 12:46:44 crc kubenswrapper[5024]: I1007 12:46:44.764110 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="230742cc-7316-4b6b-8331-5a0352b4ebcb" path="/var/lib/kubelet/pods/230742cc-7316-4b6b-8331-5a0352b4ebcb/volumes" Oct 07 12:46:44 crc kubenswrapper[5024]: I1007 12:46:44.824730 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-qxkfc" event={"ID":"ed506823-6979-4b77-8a6f-818efe132f8a","Type":"ContainerStarted","Data":"40905c927f3f96ab53b9c0e38bcdd8b22d55df08262d057e44d60c874ef4c3a2"} Oct 07 12:46:44 crc kubenswrapper[5024]: I1007 12:46:44.824850 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b946d459c-qxkfc" Oct 07 12:46:44 crc kubenswrapper[5024]: I1007 12:46:44.829961 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5664fb1-cd66-429c-a2b4-3d5d94e966f6","Type":"ContainerStarted","Data":"be1ab799864eb1db2b0e4ca4a6197451c7ec9d50318f4aa426fee505341a9ec9"} Oct 07 12:46:44 crc kubenswrapper[5024]: I1007 12:46:44.831936 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cf6bc68f7-lxxqd" event={"ID":"fdc1f1c0-bece-42f0-b499-bdf645a1a4c9","Type":"ContainerStarted","Data":"6bc4d280c6e86a9d08550140ae837f7bc6010cbc6335ab8180874870221157e4"} Oct 07 12:46:44 crc kubenswrapper[5024]: I1007 12:46:44.831962 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cf6bc68f7-lxxqd" 
event={"ID":"fdc1f1c0-bece-42f0-b499-bdf645a1a4c9","Type":"ContainerStarted","Data":"213b979328c2e59feba948746d87c200bad3408ea1778462771617686655fba3"} Oct 07 12:46:44 crc kubenswrapper[5024]: I1007 12:46:44.831975 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cf6bc68f7-lxxqd" event={"ID":"fdc1f1c0-bece-42f0-b499-bdf645a1a4c9","Type":"ContainerStarted","Data":"52339638b123fb30dcae6a90cd5809d62e452f218a634d0433f89342375d6e90"} Oct 07 12:46:44 crc kubenswrapper[5024]: I1007 12:46:44.832119 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6cf6bc68f7-lxxqd" Oct 07 12:46:44 crc kubenswrapper[5024]: I1007 12:46:44.850698 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b946d459c-qxkfc" podStartSLOduration=4.850682174 podStartE2EDuration="4.850682174s" podCreationTimestamp="2025-10-07 12:46:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:46:44.845493141 +0000 UTC m=+1142.921279979" watchObservedRunningTime="2025-10-07 12:46:44.850682174 +0000 UTC m=+1142.926469012" Oct 07 12:46:44 crc kubenswrapper[5024]: I1007 12:46:44.865535 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6cf6bc68f7-lxxqd" podStartSLOduration=1.865519629 podStartE2EDuration="1.865519629s" podCreationTimestamp="2025-10-07 12:46:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:46:44.863977844 +0000 UTC m=+1142.939764692" watchObservedRunningTime="2025-10-07 12:46:44.865519629 +0000 UTC m=+1142.941306467" Oct 07 12:46:45 crc kubenswrapper[5024]: I1007 12:46:45.850275 5024 generic.go:334] "Generic (PLEG): container finished" podID="a2f43bf9-8914-4def-a454-a4e5bd3d843b" 
containerID="e595cd218e3b423dca4725103664a5c480a4ec6c08a3d61b0fcc126296575362" exitCode=0 Oct 07 12:46:45 crc kubenswrapper[5024]: I1007 12:46:45.850387 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-s85hg" event={"ID":"a2f43bf9-8914-4def-a454-a4e5bd3d843b","Type":"ContainerDied","Data":"e595cd218e3b423dca4725103664a5c480a4ec6c08a3d61b0fcc126296575362"} Oct 07 12:46:45 crc kubenswrapper[5024]: I1007 12:46:45.858395 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5664fb1-cd66-429c-a2b4-3d5d94e966f6","Type":"ContainerStarted","Data":"866d8e655101df38a75187a736e9d094eed461c76dadfa4d88e3a27fbed5a638"} Oct 07 12:46:45 crc kubenswrapper[5024]: I1007 12:46:45.858459 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5664fb1-cd66-429c-a2b4-3d5d94e966f6","Type":"ContainerStarted","Data":"761f98543d20ddc9bdccc0288ad1e1a6795257bcd0ede7237c9d03302c10076a"} Oct 07 12:46:47 crc kubenswrapper[5024]: I1007 12:46:47.253937 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-s85hg" Oct 07 12:46:47 crc kubenswrapper[5024]: I1007 12:46:47.327412 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a2f43bf9-8914-4def-a454-a4e5bd3d843b-db-sync-config-data\") pod \"a2f43bf9-8914-4def-a454-a4e5bd3d843b\" (UID: \"a2f43bf9-8914-4def-a454-a4e5bd3d843b\") " Oct 07 12:46:47 crc kubenswrapper[5024]: I1007 12:46:47.327524 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np8bh\" (UniqueName: \"kubernetes.io/projected/a2f43bf9-8914-4def-a454-a4e5bd3d843b-kube-api-access-np8bh\") pod \"a2f43bf9-8914-4def-a454-a4e5bd3d843b\" (UID: \"a2f43bf9-8914-4def-a454-a4e5bd3d843b\") " Oct 07 12:46:47 crc kubenswrapper[5024]: I1007 12:46:47.327642 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f43bf9-8914-4def-a454-a4e5bd3d843b-combined-ca-bundle\") pod \"a2f43bf9-8914-4def-a454-a4e5bd3d843b\" (UID: \"a2f43bf9-8914-4def-a454-a4e5bd3d843b\") " Oct 07 12:46:47 crc kubenswrapper[5024]: I1007 12:46:47.332925 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2f43bf9-8914-4def-a454-a4e5bd3d843b-kube-api-access-np8bh" (OuterVolumeSpecName: "kube-api-access-np8bh") pod "a2f43bf9-8914-4def-a454-a4e5bd3d843b" (UID: "a2f43bf9-8914-4def-a454-a4e5bd3d843b"). InnerVolumeSpecName "kube-api-access-np8bh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:46:47 crc kubenswrapper[5024]: I1007 12:46:47.333115 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f43bf9-8914-4def-a454-a4e5bd3d843b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a2f43bf9-8914-4def-a454-a4e5bd3d843b" (UID: "a2f43bf9-8914-4def-a454-a4e5bd3d843b"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:46:47 crc kubenswrapper[5024]: I1007 12:46:47.368218 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f43bf9-8914-4def-a454-a4e5bd3d843b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2f43bf9-8914-4def-a454-a4e5bd3d843b" (UID: "a2f43bf9-8914-4def-a454-a4e5bd3d843b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:46:47 crc kubenswrapper[5024]: I1007 12:46:47.429220 5024 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a2f43bf9-8914-4def-a454-a4e5bd3d843b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:47 crc kubenswrapper[5024]: I1007 12:46:47.429257 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np8bh\" (UniqueName: \"kubernetes.io/projected/a2f43bf9-8914-4def-a454-a4e5bd3d843b-kube-api-access-np8bh\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:47 crc kubenswrapper[5024]: I1007 12:46:47.429271 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f43bf9-8914-4def-a454-a4e5bd3d843b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:47 crc kubenswrapper[5024]: I1007 12:46:47.878027 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-s85hg" event={"ID":"a2f43bf9-8914-4def-a454-a4e5bd3d843b","Type":"ContainerDied","Data":"e41ba0e2e8382157169c5dcfa560407740524adaf74403324a3f8180be0fbf64"} Oct 07 12:46:47 crc kubenswrapper[5024]: I1007 12:46:47.879035 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e41ba0e2e8382157169c5dcfa560407740524adaf74403324a3f8180be0fbf64" Oct 07 12:46:47 crc kubenswrapper[5024]: I1007 12:46:47.878136 5024 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-s85hg" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.078497 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7d95969967-77m89"] Oct 07 12:46:48 crc kubenswrapper[5024]: E1007 12:46:48.079177 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f43bf9-8914-4def-a454-a4e5bd3d843b" containerName="barbican-db-sync" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.079200 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f43bf9-8914-4def-a454-a4e5bd3d843b" containerName="barbican-db-sync" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.079401 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2f43bf9-8914-4def-a454-a4e5bd3d843b" containerName="barbican-db-sync" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.080615 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7d95969967-77m89" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.084774 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-rjp4t" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.085009 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.085181 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Oct 07 12:46:48 crc kubenswrapper[5024]: E1007 12:46:48.100433 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2f43bf9_8914_4def_a454_a4e5bd3d843b.slice/crio-e41ba0e2e8382157169c5dcfa560407740524adaf74403324a3f8180be0fbf64\": RecentStats: unable to find data in memory cache]" Oct 07 12:46:48 crc 
kubenswrapper[5024]: I1007 12:46:48.106216 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7d95969967-77m89"] Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.121694 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7577458c58-jdh8c"] Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.123722 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7577458c58-jdh8c" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.148357 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.149359 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5v5c\" (UniqueName: \"kubernetes.io/projected/fd4f1323-3ac0-4f6b-99fe-58ef913e21bd-kube-api-access-m5v5c\") pod \"barbican-keystone-listener-7577458c58-jdh8c\" (UID: \"fd4f1323-3ac0-4f6b-99fe-58ef913e21bd\") " pod="openstack/barbican-keystone-listener-7577458c58-jdh8c" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.149411 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpsmd\" (UniqueName: \"kubernetes.io/projected/84128da8-a6ce-4984-8cdd-e9b6202196c8-kube-api-access-qpsmd\") pod \"barbican-worker-7d95969967-77m89\" (UID: \"84128da8-a6ce-4984-8cdd-e9b6202196c8\") " pod="openstack/barbican-worker-7d95969967-77m89" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.149481 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84128da8-a6ce-4984-8cdd-e9b6202196c8-combined-ca-bundle\") pod \"barbican-worker-7d95969967-77m89\" (UID: \"84128da8-a6ce-4984-8cdd-e9b6202196c8\") " 
pod="openstack/barbican-worker-7d95969967-77m89" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.149606 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84128da8-a6ce-4984-8cdd-e9b6202196c8-logs\") pod \"barbican-worker-7d95969967-77m89\" (UID: \"84128da8-a6ce-4984-8cdd-e9b6202196c8\") " pod="openstack/barbican-worker-7d95969967-77m89" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.149631 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4f1323-3ac0-4f6b-99fe-58ef913e21bd-logs\") pod \"barbican-keystone-listener-7577458c58-jdh8c\" (UID: \"fd4f1323-3ac0-4f6b-99fe-58ef913e21bd\") " pod="openstack/barbican-keystone-listener-7577458c58-jdh8c" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.149717 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd4f1323-3ac0-4f6b-99fe-58ef913e21bd-config-data-custom\") pod \"barbican-keystone-listener-7577458c58-jdh8c\" (UID: \"fd4f1323-3ac0-4f6b-99fe-58ef913e21bd\") " pod="openstack/barbican-keystone-listener-7577458c58-jdh8c" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.149741 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4f1323-3ac0-4f6b-99fe-58ef913e21bd-combined-ca-bundle\") pod \"barbican-keystone-listener-7577458c58-jdh8c\" (UID: \"fd4f1323-3ac0-4f6b-99fe-58ef913e21bd\") " pod="openstack/barbican-keystone-listener-7577458c58-jdh8c" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.149802 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/84128da8-a6ce-4984-8cdd-e9b6202196c8-config-data-custom\") pod \"barbican-worker-7d95969967-77m89\" (UID: \"84128da8-a6ce-4984-8cdd-e9b6202196c8\") " pod="openstack/barbican-worker-7d95969967-77m89" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.149877 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84128da8-a6ce-4984-8cdd-e9b6202196c8-config-data\") pod \"barbican-worker-7d95969967-77m89\" (UID: \"84128da8-a6ce-4984-8cdd-e9b6202196c8\") " pod="openstack/barbican-worker-7d95969967-77m89" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.149949 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4f1323-3ac0-4f6b-99fe-58ef913e21bd-config-data\") pod \"barbican-keystone-listener-7577458c58-jdh8c\" (UID: \"fd4f1323-3ac0-4f6b-99fe-58ef913e21bd\") " pod="openstack/barbican-keystone-listener-7577458c58-jdh8c" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.216308 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7577458c58-jdh8c"] Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.252863 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4f1323-3ac0-4f6b-99fe-58ef913e21bd-config-data\") pod \"barbican-keystone-listener-7577458c58-jdh8c\" (UID: \"fd4f1323-3ac0-4f6b-99fe-58ef913e21bd\") " pod="openstack/barbican-keystone-listener-7577458c58-jdh8c" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.253243 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5v5c\" (UniqueName: \"kubernetes.io/projected/fd4f1323-3ac0-4f6b-99fe-58ef913e21bd-kube-api-access-m5v5c\") pod \"barbican-keystone-listener-7577458c58-jdh8c\" (UID: 
\"fd4f1323-3ac0-4f6b-99fe-58ef913e21bd\") " pod="openstack/barbican-keystone-listener-7577458c58-jdh8c" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.253298 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpsmd\" (UniqueName: \"kubernetes.io/projected/84128da8-a6ce-4984-8cdd-e9b6202196c8-kube-api-access-qpsmd\") pod \"barbican-worker-7d95969967-77m89\" (UID: \"84128da8-a6ce-4984-8cdd-e9b6202196c8\") " pod="openstack/barbican-worker-7d95969967-77m89" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.253392 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84128da8-a6ce-4984-8cdd-e9b6202196c8-combined-ca-bundle\") pod \"barbican-worker-7d95969967-77m89\" (UID: \"84128da8-a6ce-4984-8cdd-e9b6202196c8\") " pod="openstack/barbican-worker-7d95969967-77m89" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.253555 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84128da8-a6ce-4984-8cdd-e9b6202196c8-logs\") pod \"barbican-worker-7d95969967-77m89\" (UID: \"84128da8-a6ce-4984-8cdd-e9b6202196c8\") " pod="openstack/barbican-worker-7d95969967-77m89" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.253585 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4f1323-3ac0-4f6b-99fe-58ef913e21bd-logs\") pod \"barbican-keystone-listener-7577458c58-jdh8c\" (UID: \"fd4f1323-3ac0-4f6b-99fe-58ef913e21bd\") " pod="openstack/barbican-keystone-listener-7577458c58-jdh8c" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.254150 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd4f1323-3ac0-4f6b-99fe-58ef913e21bd-config-data-custom\") pod 
\"barbican-keystone-listener-7577458c58-jdh8c\" (UID: \"fd4f1323-3ac0-4f6b-99fe-58ef913e21bd\") " pod="openstack/barbican-keystone-listener-7577458c58-jdh8c" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.254205 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4f1323-3ac0-4f6b-99fe-58ef913e21bd-combined-ca-bundle\") pod \"barbican-keystone-listener-7577458c58-jdh8c\" (UID: \"fd4f1323-3ac0-4f6b-99fe-58ef913e21bd\") " pod="openstack/barbican-keystone-listener-7577458c58-jdh8c" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.254271 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84128da8-a6ce-4984-8cdd-e9b6202196c8-config-data-custom\") pod \"barbican-worker-7d95969967-77m89\" (UID: \"84128da8-a6ce-4984-8cdd-e9b6202196c8\") " pod="openstack/barbican-worker-7d95969967-77m89" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.254310 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4f1323-3ac0-4f6b-99fe-58ef913e21bd-logs\") pod \"barbican-keystone-listener-7577458c58-jdh8c\" (UID: \"fd4f1323-3ac0-4f6b-99fe-58ef913e21bd\") " pod="openstack/barbican-keystone-listener-7577458c58-jdh8c" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.254395 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84128da8-a6ce-4984-8cdd-e9b6202196c8-config-data\") pod \"barbican-worker-7d95969967-77m89\" (UID: \"84128da8-a6ce-4984-8cdd-e9b6202196c8\") " pod="openstack/barbican-worker-7d95969967-77m89" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.256101 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84128da8-a6ce-4984-8cdd-e9b6202196c8-logs\") pod 
\"barbican-worker-7d95969967-77m89\" (UID: \"84128da8-a6ce-4984-8cdd-e9b6202196c8\") " pod="openstack/barbican-worker-7d95969967-77m89" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.267122 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4f1323-3ac0-4f6b-99fe-58ef913e21bd-config-data\") pod \"barbican-keystone-listener-7577458c58-jdh8c\" (UID: \"fd4f1323-3ac0-4f6b-99fe-58ef913e21bd\") " pod="openstack/barbican-keystone-listener-7577458c58-jdh8c" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.268099 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4f1323-3ac0-4f6b-99fe-58ef913e21bd-combined-ca-bundle\") pod \"barbican-keystone-listener-7577458c58-jdh8c\" (UID: \"fd4f1323-3ac0-4f6b-99fe-58ef913e21bd\") " pod="openstack/barbican-keystone-listener-7577458c58-jdh8c" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.274658 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84128da8-a6ce-4984-8cdd-e9b6202196c8-config-data-custom\") pod \"barbican-worker-7d95969967-77m89\" (UID: \"84128da8-a6ce-4984-8cdd-e9b6202196c8\") " pod="openstack/barbican-worker-7d95969967-77m89" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.275014 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84128da8-a6ce-4984-8cdd-e9b6202196c8-config-data\") pod \"barbican-worker-7d95969967-77m89\" (UID: \"84128da8-a6ce-4984-8cdd-e9b6202196c8\") " pod="openstack/barbican-worker-7d95969967-77m89" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.275828 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd4f1323-3ac0-4f6b-99fe-58ef913e21bd-config-data-custom\") pod 
\"barbican-keystone-listener-7577458c58-jdh8c\" (UID: \"fd4f1323-3ac0-4f6b-99fe-58ef913e21bd\") " pod="openstack/barbican-keystone-listener-7577458c58-jdh8c" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.282745 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5v5c\" (UniqueName: \"kubernetes.io/projected/fd4f1323-3ac0-4f6b-99fe-58ef913e21bd-kube-api-access-m5v5c\") pod \"barbican-keystone-listener-7577458c58-jdh8c\" (UID: \"fd4f1323-3ac0-4f6b-99fe-58ef913e21bd\") " pod="openstack/barbican-keystone-listener-7577458c58-jdh8c" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.283029 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84128da8-a6ce-4984-8cdd-e9b6202196c8-combined-ca-bundle\") pod \"barbican-worker-7d95969967-77m89\" (UID: \"84128da8-a6ce-4984-8cdd-e9b6202196c8\") " pod="openstack/barbican-worker-7d95969967-77m89" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.296638 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpsmd\" (UniqueName: \"kubernetes.io/projected/84128da8-a6ce-4984-8cdd-e9b6202196c8-kube-api-access-qpsmd\") pod \"barbican-worker-7d95969967-77m89\" (UID: \"84128da8-a6ce-4984-8cdd-e9b6202196c8\") " pod="openstack/barbican-worker-7d95969967-77m89" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.384826 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-qxkfc"] Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.385073 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b946d459c-qxkfc" podUID="ed506823-6979-4b77-8a6f-818efe132f8a" containerName="dnsmasq-dns" containerID="cri-o://40905c927f3f96ab53b9c0e38bcdd8b22d55df08262d057e44d60c874ef4c3a2" gracePeriod=10 Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.396295 5024 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-wsqxr"] Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.397641 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-wsqxr" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.405192 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-wsqxr"] Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.410526 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-69b586f4c8-2p4k4"] Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.412104 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-69b586f4c8-2p4k4" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.417247 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.421750 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7d95969967-77m89" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.423930 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-69b586f4c8-2p4k4"] Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.458556 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7577458c58-jdh8c" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.459014 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-config-data-custom\") pod \"barbican-api-69b586f4c8-2p4k4\" (UID: \"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34\") " pod="openstack/barbican-api-69b586f4c8-2p4k4" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.459098 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3cbb1818-3bab-4718-a7d0-5d75056b4c46-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-wsqxr\" (UID: \"3cbb1818-3bab-4718-a7d0-5d75056b4c46\") " pod="openstack/dnsmasq-dns-6bb684768f-wsqxr" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.459127 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-config-data\") pod \"barbican-api-69b586f4c8-2p4k4\" (UID: \"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34\") " pod="openstack/barbican-api-69b586f4c8-2p4k4" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.459190 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-logs\") pod \"barbican-api-69b586f4c8-2p4k4\" (UID: \"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34\") " pod="openstack/barbican-api-69b586f4c8-2p4k4" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.459218 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cbb1818-3bab-4718-a7d0-5d75056b4c46-config\") pod \"dnsmasq-dns-6bb684768f-wsqxr\" 
(UID: \"3cbb1818-3bab-4718-a7d0-5d75056b4c46\") " pod="openstack/dnsmasq-dns-6bb684768f-wsqxr" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.459279 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfxlc\" (UniqueName: \"kubernetes.io/projected/3cbb1818-3bab-4718-a7d0-5d75056b4c46-kube-api-access-rfxlc\") pod \"dnsmasq-dns-6bb684768f-wsqxr\" (UID: \"3cbb1818-3bab-4718-a7d0-5d75056b4c46\") " pod="openstack/dnsmasq-dns-6bb684768f-wsqxr" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.459375 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3cbb1818-3bab-4718-a7d0-5d75056b4c46-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-wsqxr\" (UID: \"3cbb1818-3bab-4718-a7d0-5d75056b4c46\") " pod="openstack/dnsmasq-dns-6bb684768f-wsqxr" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.459401 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cbb1818-3bab-4718-a7d0-5d75056b4c46-dns-svc\") pod \"dnsmasq-dns-6bb684768f-wsqxr\" (UID: \"3cbb1818-3bab-4718-a7d0-5d75056b4c46\") " pod="openstack/dnsmasq-dns-6bb684768f-wsqxr" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.459508 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g692f\" (UniqueName: \"kubernetes.io/projected/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-kube-api-access-g692f\") pod \"barbican-api-69b586f4c8-2p4k4\" (UID: \"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34\") " pod="openstack/barbican-api-69b586f4c8-2p4k4" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.459609 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-combined-ca-bundle\") pod \"barbican-api-69b586f4c8-2p4k4\" (UID: \"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34\") " pod="openstack/barbican-api-69b586f4c8-2p4k4" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.562130 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3cbb1818-3bab-4718-a7d0-5d75056b4c46-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-wsqxr\" (UID: \"3cbb1818-3bab-4718-a7d0-5d75056b4c46\") " pod="openstack/dnsmasq-dns-6bb684768f-wsqxr" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.562220 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cbb1818-3bab-4718-a7d0-5d75056b4c46-dns-svc\") pod \"dnsmasq-dns-6bb684768f-wsqxr\" (UID: \"3cbb1818-3bab-4718-a7d0-5d75056b4c46\") " pod="openstack/dnsmasq-dns-6bb684768f-wsqxr" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.562272 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g692f\" (UniqueName: \"kubernetes.io/projected/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-kube-api-access-g692f\") pod \"barbican-api-69b586f4c8-2p4k4\" (UID: \"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34\") " pod="openstack/barbican-api-69b586f4c8-2p4k4" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.562364 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-combined-ca-bundle\") pod \"barbican-api-69b586f4c8-2p4k4\" (UID: \"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34\") " pod="openstack/barbican-api-69b586f4c8-2p4k4" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.562416 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-config-data-custom\") pod \"barbican-api-69b586f4c8-2p4k4\" (UID: \"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34\") " pod="openstack/barbican-api-69b586f4c8-2p4k4" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.562442 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3cbb1818-3bab-4718-a7d0-5d75056b4c46-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-wsqxr\" (UID: \"3cbb1818-3bab-4718-a7d0-5d75056b4c46\") " pod="openstack/dnsmasq-dns-6bb684768f-wsqxr" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.562463 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-config-data\") pod \"barbican-api-69b586f4c8-2p4k4\" (UID: \"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34\") " pod="openstack/barbican-api-69b586f4c8-2p4k4" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.562489 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-logs\") pod \"barbican-api-69b586f4c8-2p4k4\" (UID: \"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34\") " pod="openstack/barbican-api-69b586f4c8-2p4k4" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.562514 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cbb1818-3bab-4718-a7d0-5d75056b4c46-config\") pod \"dnsmasq-dns-6bb684768f-wsqxr\" (UID: \"3cbb1818-3bab-4718-a7d0-5d75056b4c46\") " pod="openstack/dnsmasq-dns-6bb684768f-wsqxr" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.562536 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfxlc\" (UniqueName: 
\"kubernetes.io/projected/3cbb1818-3bab-4718-a7d0-5d75056b4c46-kube-api-access-rfxlc\") pod \"dnsmasq-dns-6bb684768f-wsqxr\" (UID: \"3cbb1818-3bab-4718-a7d0-5d75056b4c46\") " pod="openstack/dnsmasq-dns-6bb684768f-wsqxr" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.564223 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-logs\") pod \"barbican-api-69b586f4c8-2p4k4\" (UID: \"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34\") " pod="openstack/barbican-api-69b586f4c8-2p4k4" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.564509 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cbb1818-3bab-4718-a7d0-5d75056b4c46-dns-svc\") pod \"dnsmasq-dns-6bb684768f-wsqxr\" (UID: \"3cbb1818-3bab-4718-a7d0-5d75056b4c46\") " pod="openstack/dnsmasq-dns-6bb684768f-wsqxr" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.564563 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3cbb1818-3bab-4718-a7d0-5d75056b4c46-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-wsqxr\" (UID: \"3cbb1818-3bab-4718-a7d0-5d75056b4c46\") " pod="openstack/dnsmasq-dns-6bb684768f-wsqxr" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.564890 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cbb1818-3bab-4718-a7d0-5d75056b4c46-config\") pod \"dnsmasq-dns-6bb684768f-wsqxr\" (UID: \"3cbb1818-3bab-4718-a7d0-5d75056b4c46\") " pod="openstack/dnsmasq-dns-6bb684768f-wsqxr" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.565421 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3cbb1818-3bab-4718-a7d0-5d75056b4c46-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-wsqxr\" (UID: 
\"3cbb1818-3bab-4718-a7d0-5d75056b4c46\") " pod="openstack/dnsmasq-dns-6bb684768f-wsqxr" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.579827 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-config-data\") pod \"barbican-api-69b586f4c8-2p4k4\" (UID: \"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34\") " pod="openstack/barbican-api-69b586f4c8-2p4k4" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.587112 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-combined-ca-bundle\") pod \"barbican-api-69b586f4c8-2p4k4\" (UID: \"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34\") " pod="openstack/barbican-api-69b586f4c8-2p4k4" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.599294 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-config-data-custom\") pod \"barbican-api-69b586f4c8-2p4k4\" (UID: \"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34\") " pod="openstack/barbican-api-69b586f4c8-2p4k4" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.606576 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfxlc\" (UniqueName: \"kubernetes.io/projected/3cbb1818-3bab-4718-a7d0-5d75056b4c46-kube-api-access-rfxlc\") pod \"dnsmasq-dns-6bb684768f-wsqxr\" (UID: \"3cbb1818-3bab-4718-a7d0-5d75056b4c46\") " pod="openstack/dnsmasq-dns-6bb684768f-wsqxr" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.610723 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g692f\" (UniqueName: \"kubernetes.io/projected/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-kube-api-access-g692f\") pod \"barbican-api-69b586f4c8-2p4k4\" (UID: \"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34\") " 
pod="openstack/barbican-api-69b586f4c8-2p4k4" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.728551 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-wsqxr" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.750637 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-69b586f4c8-2p4k4" Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.971804 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5664fb1-cd66-429c-a2b4-3d5d94e966f6","Type":"ContainerStarted","Data":"5327e081dee08e0902d81ea2823d09b30bd3925fc747669e6b0868e674482079"} Oct 07 12:46:48 crc kubenswrapper[5024]: I1007 12:46:48.972967 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 07 12:46:49 crc kubenswrapper[5024]: I1007 12:46:49.025390 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.070009129 podStartE2EDuration="7.025369867s" podCreationTimestamp="2025-10-07 12:46:42 +0000 UTC" firstStartedPulling="2025-10-07 12:46:43.717919106 +0000 UTC m=+1141.793705944" lastFinishedPulling="2025-10-07 12:46:47.673279834 +0000 UTC m=+1145.749066682" observedRunningTime="2025-10-07 12:46:49.021457872 +0000 UTC m=+1147.097244710" watchObservedRunningTime="2025-10-07 12:46:49.025369867 +0000 UTC m=+1147.101156705" Oct 07 12:46:49 crc kubenswrapper[5024]: I1007 12:46:49.095080 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7d95969967-77m89"] Oct 07 12:46:49 crc kubenswrapper[5024]: W1007 12:46:49.119047 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84128da8_a6ce_4984_8cdd_e9b6202196c8.slice/crio-f64c51173f6ae9441da10157a96183b59268e90d8f3ec309ad4277845822e07b WatchSource:0}: Error finding 
container f64c51173f6ae9441da10157a96183b59268e90d8f3ec309ad4277845822e07b: Status 404 returned error can't find the container with id f64c51173f6ae9441da10157a96183b59268e90d8f3ec309ad4277845822e07b Oct 07 12:46:49 crc kubenswrapper[5024]: I1007 12:46:49.307051 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7577458c58-jdh8c"] Oct 07 12:46:49 crc kubenswrapper[5024]: I1007 12:46:49.498694 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-wsqxr"] Oct 07 12:46:49 crc kubenswrapper[5024]: I1007 12:46:49.602315 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-69b586f4c8-2p4k4"] Oct 07 12:46:49 crc kubenswrapper[5024]: W1007 12:46:49.622439 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b1aacd4_83f0_4eb0_9cdb_4c7e31987e34.slice/crio-df99ca55956a27b10d491a5b0a63f84d295c0d37c9d6d8358ef4b92501d4760c WatchSource:0}: Error finding container df99ca55956a27b10d491a5b0a63f84d295c0d37c9d6d8358ef4b92501d4760c: Status 404 returned error can't find the container with id df99ca55956a27b10d491a5b0a63f84d295c0d37c9d6d8358ef4b92501d4760c Oct 07 12:46:49 crc kubenswrapper[5024]: I1007 12:46:49.917170 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-qxkfc" Oct 07 12:46:49 crc kubenswrapper[5024]: I1007 12:46:49.987259 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69b586f4c8-2p4k4" event={"ID":"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34","Type":"ContainerStarted","Data":"df99ca55956a27b10d491a5b0a63f84d295c0d37c9d6d8358ef4b92501d4760c"} Oct 07 12:46:49 crc kubenswrapper[5024]: I1007 12:46:49.989567 5024 generic.go:334] "Generic (PLEG): container finished" podID="ed506823-6979-4b77-8a6f-818efe132f8a" containerID="40905c927f3f96ab53b9c0e38bcdd8b22d55df08262d057e44d60c874ef4c3a2" exitCode=0 Oct 07 12:46:49 crc kubenswrapper[5024]: I1007 12:46:49.989646 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-qxkfc" event={"ID":"ed506823-6979-4b77-8a6f-818efe132f8a","Type":"ContainerDied","Data":"40905c927f3f96ab53b9c0e38bcdd8b22d55df08262d057e44d60c874ef4c3a2"} Oct 07 12:46:49 crc kubenswrapper[5024]: I1007 12:46:49.989678 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-qxkfc" event={"ID":"ed506823-6979-4b77-8a6f-818efe132f8a","Type":"ContainerDied","Data":"5fb19d9cb2c7cf54ac28906c93213cd34796d074427f5f713abb83c4d3f52953"} Oct 07 12:46:49 crc kubenswrapper[5024]: I1007 12:46:49.989679 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-qxkfc" Oct 07 12:46:49 crc kubenswrapper[5024]: I1007 12:46:49.989694 5024 scope.go:117] "RemoveContainer" containerID="40905c927f3f96ab53b9c0e38bcdd8b22d55df08262d057e44d60c874ef4c3a2" Oct 07 12:46:49 crc kubenswrapper[5024]: I1007 12:46:49.995915 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7577458c58-jdh8c" event={"ID":"fd4f1323-3ac0-4f6b-99fe-58ef913e21bd","Type":"ContainerStarted","Data":"828bd5bbfd1311a4358cdf1a9305c91049b93baff9cb407f5fff4292d2e7a604"} Oct 07 12:46:49 crc kubenswrapper[5024]: I1007 12:46:49.998325 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d95969967-77m89" event={"ID":"84128da8-a6ce-4984-8cdd-e9b6202196c8","Type":"ContainerStarted","Data":"f64c51173f6ae9441da10157a96183b59268e90d8f3ec309ad4277845822e07b"} Oct 07 12:46:49 crc kubenswrapper[5024]: I1007 12:46:49.999124 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-wsqxr" event={"ID":"3cbb1818-3bab-4718-a7d0-5d75056b4c46","Type":"ContainerStarted","Data":"8759f572d4a6a43662725a8d25213fc8979efda50f6ca85b672198f740c68eda"} Oct 07 12:46:50 crc kubenswrapper[5024]: I1007 12:46:50.015130 5024 scope.go:117] "RemoveContainer" containerID="8dfd22216b451728afb8cced42081679648139edd438aa3f469d1ab40268964e" Oct 07 12:46:50 crc kubenswrapper[5024]: I1007 12:46:50.034633 5024 scope.go:117] "RemoveContainer" containerID="40905c927f3f96ab53b9c0e38bcdd8b22d55df08262d057e44d60c874ef4c3a2" Oct 07 12:46:50 crc kubenswrapper[5024]: E1007 12:46:50.035039 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40905c927f3f96ab53b9c0e38bcdd8b22d55df08262d057e44d60c874ef4c3a2\": container with ID starting with 40905c927f3f96ab53b9c0e38bcdd8b22d55df08262d057e44d60c874ef4c3a2 not found: ID does not exist" 
containerID="40905c927f3f96ab53b9c0e38bcdd8b22d55df08262d057e44d60c874ef4c3a2" Oct 07 12:46:50 crc kubenswrapper[5024]: I1007 12:46:50.035077 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40905c927f3f96ab53b9c0e38bcdd8b22d55df08262d057e44d60c874ef4c3a2"} err="failed to get container status \"40905c927f3f96ab53b9c0e38bcdd8b22d55df08262d057e44d60c874ef4c3a2\": rpc error: code = NotFound desc = could not find container \"40905c927f3f96ab53b9c0e38bcdd8b22d55df08262d057e44d60c874ef4c3a2\": container with ID starting with 40905c927f3f96ab53b9c0e38bcdd8b22d55df08262d057e44d60c874ef4c3a2 not found: ID does not exist" Oct 07 12:46:50 crc kubenswrapper[5024]: I1007 12:46:50.035107 5024 scope.go:117] "RemoveContainer" containerID="8dfd22216b451728afb8cced42081679648139edd438aa3f469d1ab40268964e" Oct 07 12:46:50 crc kubenswrapper[5024]: E1007 12:46:50.035442 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dfd22216b451728afb8cced42081679648139edd438aa3f469d1ab40268964e\": container with ID starting with 8dfd22216b451728afb8cced42081679648139edd438aa3f469d1ab40268964e not found: ID does not exist" containerID="8dfd22216b451728afb8cced42081679648139edd438aa3f469d1ab40268964e" Oct 07 12:46:50 crc kubenswrapper[5024]: I1007 12:46:50.035467 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dfd22216b451728afb8cced42081679648139edd438aa3f469d1ab40268964e"} err="failed to get container status \"8dfd22216b451728afb8cced42081679648139edd438aa3f469d1ab40268964e\": rpc error: code = NotFound desc = could not find container \"8dfd22216b451728afb8cced42081679648139edd438aa3f469d1ab40268964e\": container with ID starting with 8dfd22216b451728afb8cced42081679648139edd438aa3f469d1ab40268964e not found: ID does not exist" Oct 07 12:46:50 crc kubenswrapper[5024]: I1007 12:46:50.091866 5024 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed506823-6979-4b77-8a6f-818efe132f8a-ovsdbserver-sb\") pod \"ed506823-6979-4b77-8a6f-818efe132f8a\" (UID: \"ed506823-6979-4b77-8a6f-818efe132f8a\") " Oct 07 12:46:50 crc kubenswrapper[5024]: I1007 12:46:50.092016 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed506823-6979-4b77-8a6f-818efe132f8a-dns-svc\") pod \"ed506823-6979-4b77-8a6f-818efe132f8a\" (UID: \"ed506823-6979-4b77-8a6f-818efe132f8a\") " Oct 07 12:46:50 crc kubenswrapper[5024]: I1007 12:46:50.092637 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed506823-6979-4b77-8a6f-818efe132f8a-config\") pod \"ed506823-6979-4b77-8a6f-818efe132f8a\" (UID: \"ed506823-6979-4b77-8a6f-818efe132f8a\") " Oct 07 12:46:50 crc kubenswrapper[5024]: I1007 12:46:50.092806 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed506823-6979-4b77-8a6f-818efe132f8a-ovsdbserver-nb\") pod \"ed506823-6979-4b77-8a6f-818efe132f8a\" (UID: \"ed506823-6979-4b77-8a6f-818efe132f8a\") " Oct 07 12:46:50 crc kubenswrapper[5024]: I1007 12:46:50.092893 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4682k\" (UniqueName: \"kubernetes.io/projected/ed506823-6979-4b77-8a6f-818efe132f8a-kube-api-access-4682k\") pod \"ed506823-6979-4b77-8a6f-818efe132f8a\" (UID: \"ed506823-6979-4b77-8a6f-818efe132f8a\") " Oct 07 12:46:50 crc kubenswrapper[5024]: I1007 12:46:50.097493 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed506823-6979-4b77-8a6f-818efe132f8a-kube-api-access-4682k" (OuterVolumeSpecName: "kube-api-access-4682k") pod 
"ed506823-6979-4b77-8a6f-818efe132f8a" (UID: "ed506823-6979-4b77-8a6f-818efe132f8a"). InnerVolumeSpecName "kube-api-access-4682k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:46:50 crc kubenswrapper[5024]: I1007 12:46:50.139760 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed506823-6979-4b77-8a6f-818efe132f8a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ed506823-6979-4b77-8a6f-818efe132f8a" (UID: "ed506823-6979-4b77-8a6f-818efe132f8a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:46:50 crc kubenswrapper[5024]: I1007 12:46:50.147850 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed506823-6979-4b77-8a6f-818efe132f8a-config" (OuterVolumeSpecName: "config") pod "ed506823-6979-4b77-8a6f-818efe132f8a" (UID: "ed506823-6979-4b77-8a6f-818efe132f8a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:46:50 crc kubenswrapper[5024]: I1007 12:46:50.152694 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed506823-6979-4b77-8a6f-818efe132f8a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ed506823-6979-4b77-8a6f-818efe132f8a" (UID: "ed506823-6979-4b77-8a6f-818efe132f8a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:46:50 crc kubenswrapper[5024]: I1007 12:46:50.153870 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed506823-6979-4b77-8a6f-818efe132f8a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ed506823-6979-4b77-8a6f-818efe132f8a" (UID: "ed506823-6979-4b77-8a6f-818efe132f8a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:46:50 crc kubenswrapper[5024]: I1007 12:46:50.195517 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed506823-6979-4b77-8a6f-818efe132f8a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:50 crc kubenswrapper[5024]: I1007 12:46:50.195536 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed506823-6979-4b77-8a6f-818efe132f8a-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:50 crc kubenswrapper[5024]: I1007 12:46:50.195549 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed506823-6979-4b77-8a6f-818efe132f8a-config\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:50 crc kubenswrapper[5024]: I1007 12:46:50.195558 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed506823-6979-4b77-8a6f-818efe132f8a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:50 crc kubenswrapper[5024]: I1007 12:46:50.195568 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4682k\" (UniqueName: \"kubernetes.io/projected/ed506823-6979-4b77-8a6f-818efe132f8a-kube-api-access-4682k\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:50 crc kubenswrapper[5024]: I1007 12:46:50.435376 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6dccff77b6-pr5gt" Oct 07 12:46:50 crc kubenswrapper[5024]: I1007 12:46:50.453325 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-qxkfc"] Oct 07 12:46:50 crc kubenswrapper[5024]: I1007 12:46:50.464944 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6dccff77b6-pr5gt" Oct 07 12:46:50 crc kubenswrapper[5024]: I1007 12:46:50.470229 5024 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-qxkfc"] Oct 07 12:46:50 crc kubenswrapper[5024]: I1007 12:46:50.778289 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed506823-6979-4b77-8a6f-818efe132f8a" path="/var/lib/kubelet/pods/ed506823-6979-4b77-8a6f-818efe132f8a/volumes" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.012384 5024 generic.go:334] "Generic (PLEG): container finished" podID="3cbb1818-3bab-4718-a7d0-5d75056b4c46" containerID="757058021e4617820eb9fcf400dbf89cf0acf0ef4c702dd247bed1fa74d97a41" exitCode=0 Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.012504 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-wsqxr" event={"ID":"3cbb1818-3bab-4718-a7d0-5d75056b4c46","Type":"ContainerDied","Data":"757058021e4617820eb9fcf400dbf89cf0acf0ef4c702dd247bed1fa74d97a41"} Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.030938 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69b586f4c8-2p4k4" event={"ID":"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34","Type":"ContainerStarted","Data":"d00f889456697a588b7f53ac3f2acc468a6e8df567d2f607d531cf27044641d8"} Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.031027 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69b586f4c8-2p4k4" event={"ID":"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34","Type":"ContainerStarted","Data":"b75c46af037315d6d8d3cce69dc9c6fdfbaa492906f3669319417daedf94965c"} Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.032277 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-69b586f4c8-2p4k4" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.032311 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-69b586f4c8-2p4k4" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.101724 5024 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/barbican-api-69b586f4c8-2p4k4" podStartSLOduration=3.101668919 podStartE2EDuration="3.101668919s" podCreationTimestamp="2025-10-07 12:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:46:51.095040394 +0000 UTC m=+1149.170827232" watchObservedRunningTime="2025-10-07 12:46:51.101668919 +0000 UTC m=+1149.177455767" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.271915 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7b5d4dfdcd-m7phq"] Oct 07 12:46:51 crc kubenswrapper[5024]: E1007 12:46:51.272308 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed506823-6979-4b77-8a6f-818efe132f8a" containerName="dnsmasq-dns" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.272324 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed506823-6979-4b77-8a6f-818efe132f8a" containerName="dnsmasq-dns" Oct 07 12:46:51 crc kubenswrapper[5024]: E1007 12:46:51.272350 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed506823-6979-4b77-8a6f-818efe132f8a" containerName="init" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.272356 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed506823-6979-4b77-8a6f-818efe132f8a" containerName="init" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.272509 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed506823-6979-4b77-8a6f-818efe132f8a" containerName="dnsmasq-dns" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.273412 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7b5d4dfdcd-m7phq" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.278205 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.282746 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.288650 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b5d4dfdcd-m7phq"] Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.430175 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7d0c7cb-ebe4-4183-a7e5-56627d4a5073-config-data-custom\") pod \"barbican-api-7b5d4dfdcd-m7phq\" (UID: \"e7d0c7cb-ebe4-4183-a7e5-56627d4a5073\") " pod="openstack/barbican-api-7b5d4dfdcd-m7phq" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.430244 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7d0c7cb-ebe4-4183-a7e5-56627d4a5073-public-tls-certs\") pod \"barbican-api-7b5d4dfdcd-m7phq\" (UID: \"e7d0c7cb-ebe4-4183-a7e5-56627d4a5073\") " pod="openstack/barbican-api-7b5d4dfdcd-m7phq" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.430286 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q6vx\" (UniqueName: \"kubernetes.io/projected/e7d0c7cb-ebe4-4183-a7e5-56627d4a5073-kube-api-access-7q6vx\") pod \"barbican-api-7b5d4dfdcd-m7phq\" (UID: \"e7d0c7cb-ebe4-4183-a7e5-56627d4a5073\") " pod="openstack/barbican-api-7b5d4dfdcd-m7phq" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.430443 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/e7d0c7cb-ebe4-4183-a7e5-56627d4a5073-logs\") pod \"barbican-api-7b5d4dfdcd-m7phq\" (UID: \"e7d0c7cb-ebe4-4183-a7e5-56627d4a5073\") " pod="openstack/barbican-api-7b5d4dfdcd-m7phq" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.430494 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7d0c7cb-ebe4-4183-a7e5-56627d4a5073-internal-tls-certs\") pod \"barbican-api-7b5d4dfdcd-m7phq\" (UID: \"e7d0c7cb-ebe4-4183-a7e5-56627d4a5073\") " pod="openstack/barbican-api-7b5d4dfdcd-m7phq" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.430514 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7d0c7cb-ebe4-4183-a7e5-56627d4a5073-config-data\") pod \"barbican-api-7b5d4dfdcd-m7phq\" (UID: \"e7d0c7cb-ebe4-4183-a7e5-56627d4a5073\") " pod="openstack/barbican-api-7b5d4dfdcd-m7phq" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.430544 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7d0c7cb-ebe4-4183-a7e5-56627d4a5073-combined-ca-bundle\") pod \"barbican-api-7b5d4dfdcd-m7phq\" (UID: \"e7d0c7cb-ebe4-4183-a7e5-56627d4a5073\") " pod="openstack/barbican-api-7b5d4dfdcd-m7phq" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.537074 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7d0c7cb-ebe4-4183-a7e5-56627d4a5073-logs\") pod \"barbican-api-7b5d4dfdcd-m7phq\" (UID: \"e7d0c7cb-ebe4-4183-a7e5-56627d4a5073\") " pod="openstack/barbican-api-7b5d4dfdcd-m7phq" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.537930 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e7d0c7cb-ebe4-4183-a7e5-56627d4a5073-internal-tls-certs\") pod \"barbican-api-7b5d4dfdcd-m7phq\" (UID: \"e7d0c7cb-ebe4-4183-a7e5-56627d4a5073\") " pod="openstack/barbican-api-7b5d4dfdcd-m7phq" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.537576 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7d0c7cb-ebe4-4183-a7e5-56627d4a5073-logs\") pod \"barbican-api-7b5d4dfdcd-m7phq\" (UID: \"e7d0c7cb-ebe4-4183-a7e5-56627d4a5073\") " pod="openstack/barbican-api-7b5d4dfdcd-m7phq" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.537973 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7d0c7cb-ebe4-4183-a7e5-56627d4a5073-config-data\") pod \"barbican-api-7b5d4dfdcd-m7phq\" (UID: \"e7d0c7cb-ebe4-4183-a7e5-56627d4a5073\") " pod="openstack/barbican-api-7b5d4dfdcd-m7phq" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.538126 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7d0c7cb-ebe4-4183-a7e5-56627d4a5073-combined-ca-bundle\") pod \"barbican-api-7b5d4dfdcd-m7phq\" (UID: \"e7d0c7cb-ebe4-4183-a7e5-56627d4a5073\") " pod="openstack/barbican-api-7b5d4dfdcd-m7phq" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.538375 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7d0c7cb-ebe4-4183-a7e5-56627d4a5073-config-data-custom\") pod \"barbican-api-7b5d4dfdcd-m7phq\" (UID: \"e7d0c7cb-ebe4-4183-a7e5-56627d4a5073\") " pod="openstack/barbican-api-7b5d4dfdcd-m7phq" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.538463 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e7d0c7cb-ebe4-4183-a7e5-56627d4a5073-public-tls-certs\") pod \"barbican-api-7b5d4dfdcd-m7phq\" (UID: \"e7d0c7cb-ebe4-4183-a7e5-56627d4a5073\") " pod="openstack/barbican-api-7b5d4dfdcd-m7phq" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.538545 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q6vx\" (UniqueName: \"kubernetes.io/projected/e7d0c7cb-ebe4-4183-a7e5-56627d4a5073-kube-api-access-7q6vx\") pod \"barbican-api-7b5d4dfdcd-m7phq\" (UID: \"e7d0c7cb-ebe4-4183-a7e5-56627d4a5073\") " pod="openstack/barbican-api-7b5d4dfdcd-m7phq" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.546269 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7d0c7cb-ebe4-4183-a7e5-56627d4a5073-internal-tls-certs\") pod \"barbican-api-7b5d4dfdcd-m7phq\" (UID: \"e7d0c7cb-ebe4-4183-a7e5-56627d4a5073\") " pod="openstack/barbican-api-7b5d4dfdcd-m7phq" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.552677 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7d0c7cb-ebe4-4183-a7e5-56627d4a5073-config-data\") pod \"barbican-api-7b5d4dfdcd-m7phq\" (UID: \"e7d0c7cb-ebe4-4183-a7e5-56627d4a5073\") " pod="openstack/barbican-api-7b5d4dfdcd-m7phq" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.565351 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7d0c7cb-ebe4-4183-a7e5-56627d4a5073-config-data-custom\") pod \"barbican-api-7b5d4dfdcd-m7phq\" (UID: \"e7d0c7cb-ebe4-4183-a7e5-56627d4a5073\") " pod="openstack/barbican-api-7b5d4dfdcd-m7phq" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.566575 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e7d0c7cb-ebe4-4183-a7e5-56627d4a5073-public-tls-certs\") pod \"barbican-api-7b5d4dfdcd-m7phq\" (UID: \"e7d0c7cb-ebe4-4183-a7e5-56627d4a5073\") " pod="openstack/barbican-api-7b5d4dfdcd-m7phq" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.570489 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7d0c7cb-ebe4-4183-a7e5-56627d4a5073-combined-ca-bundle\") pod \"barbican-api-7b5d4dfdcd-m7phq\" (UID: \"e7d0c7cb-ebe4-4183-a7e5-56627d4a5073\") " pod="openstack/barbican-api-7b5d4dfdcd-m7phq" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.579802 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q6vx\" (UniqueName: \"kubernetes.io/projected/e7d0c7cb-ebe4-4183-a7e5-56627d4a5073-kube-api-access-7q6vx\") pod \"barbican-api-7b5d4dfdcd-m7phq\" (UID: \"e7d0c7cb-ebe4-4183-a7e5-56627d4a5073\") " pod="openstack/barbican-api-7b5d4dfdcd-m7phq" Oct 07 12:46:51 crc kubenswrapper[5024]: I1007 12:46:51.592234 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7b5d4dfdcd-m7phq" Oct 07 12:46:52 crc kubenswrapper[5024]: I1007 12:46:52.055364 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-wsqxr" event={"ID":"3cbb1818-3bab-4718-a7d0-5d75056b4c46","Type":"ContainerStarted","Data":"583832e45e6cab5aebe68e71514661333f675b4793e31bbb1fb7f5cfb8abc86a"} Oct 07 12:46:52 crc kubenswrapper[5024]: I1007 12:46:52.055898 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb684768f-wsqxr" Oct 07 12:46:52 crc kubenswrapper[5024]: I1007 12:46:52.082034 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb684768f-wsqxr" podStartSLOduration=4.081992253 podStartE2EDuration="4.081992253s" podCreationTimestamp="2025-10-07 12:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:46:52.077215783 +0000 UTC m=+1150.153002641" watchObservedRunningTime="2025-10-07 12:46:52.081992253 +0000 UTC m=+1150.157779101" Oct 07 12:46:52 crc kubenswrapper[5024]: I1007 12:46:52.552527 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b5d4dfdcd-m7phq"] Oct 07 12:46:52 crc kubenswrapper[5024]: W1007 12:46:52.569858 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7d0c7cb_ebe4_4183_a7e5_56627d4a5073.slice/crio-f451dc5d1adda495f2ae90c6cf96d3d6f3f3612ee7c8780a4621d4feae112dd0 WatchSource:0}: Error finding container f451dc5d1adda495f2ae90c6cf96d3d6f3f3612ee7c8780a4621d4feae112dd0: Status 404 returned error can't find the container with id f451dc5d1adda495f2ae90c6cf96d3d6f3f3612ee7c8780a4621d4feae112dd0 Oct 07 12:46:53 crc kubenswrapper[5024]: I1007 12:46:53.065054 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-7577458c58-jdh8c" event={"ID":"fd4f1323-3ac0-4f6b-99fe-58ef913e21bd","Type":"ContainerStarted","Data":"a510e0d2821dd7416299b981cf7f15735b570b0e65dcaf86dd5e25e3544b7599"} Oct 07 12:46:53 crc kubenswrapper[5024]: I1007 12:46:53.065587 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7577458c58-jdh8c" event={"ID":"fd4f1323-3ac0-4f6b-99fe-58ef913e21bd","Type":"ContainerStarted","Data":"68792f2c430b892311ca9889c3174bc1b1f5f1ca7ac3e1be08b5e7c58a3b87d4"} Oct 07 12:46:53 crc kubenswrapper[5024]: I1007 12:46:53.068671 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d95969967-77m89" event={"ID":"84128da8-a6ce-4984-8cdd-e9b6202196c8","Type":"ContainerStarted","Data":"429fbe1615acdb37f45e217d73b709862f052dacd2dc8d592df4f674656d2d9d"} Oct 07 12:46:53 crc kubenswrapper[5024]: I1007 12:46:53.068730 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d95969967-77m89" event={"ID":"84128da8-a6ce-4984-8cdd-e9b6202196c8","Type":"ContainerStarted","Data":"f10c1a4000b940ba409efca90c76d2f8568947749ef5eb7ab787851fcec903e1"} Oct 07 12:46:53 crc kubenswrapper[5024]: I1007 12:46:53.070792 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b5d4dfdcd-m7phq" event={"ID":"e7d0c7cb-ebe4-4183-a7e5-56627d4a5073","Type":"ContainerStarted","Data":"ac5cfb209665886bc7764ff24b6387ad7d59a80676549c82704769cf0356302f"} Oct 07 12:46:53 crc kubenswrapper[5024]: I1007 12:46:53.070826 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b5d4dfdcd-m7phq" event={"ID":"e7d0c7cb-ebe4-4183-a7e5-56627d4a5073","Type":"ContainerStarted","Data":"4a41a1d84783987fd8145644eb265ca70803d43ce477d3010fe395cd6d8a5a85"} Oct 07 12:46:53 crc kubenswrapper[5024]: I1007 12:46:53.070842 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b5d4dfdcd-m7phq" 
event={"ID":"e7d0c7cb-ebe4-4183-a7e5-56627d4a5073","Type":"ContainerStarted","Data":"f451dc5d1adda495f2ae90c6cf96d3d6f3f3612ee7c8780a4621d4feae112dd0"} Oct 07 12:46:53 crc kubenswrapper[5024]: I1007 12:46:53.086840 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7577458c58-jdh8c" podStartSLOduration=2.347847259 podStartE2EDuration="5.086821546s" podCreationTimestamp="2025-10-07 12:46:48 +0000 UTC" firstStartedPulling="2025-10-07 12:46:49.321287589 +0000 UTC m=+1147.397074427" lastFinishedPulling="2025-10-07 12:46:52.060261876 +0000 UTC m=+1150.136048714" observedRunningTime="2025-10-07 12:46:53.080552302 +0000 UTC m=+1151.156339140" watchObservedRunningTime="2025-10-07 12:46:53.086821546 +0000 UTC m=+1151.162608384" Oct 07 12:46:53 crc kubenswrapper[5024]: I1007 12:46:53.117329 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7d95969967-77m89" podStartSLOduration=2.175853674 podStartE2EDuration="5.11729073s" podCreationTimestamp="2025-10-07 12:46:48 +0000 UTC" firstStartedPulling="2025-10-07 12:46:49.125191976 +0000 UTC m=+1147.200978814" lastFinishedPulling="2025-10-07 12:46:52.066629032 +0000 UTC m=+1150.142415870" observedRunningTime="2025-10-07 12:46:53.112603513 +0000 UTC m=+1151.188390351" watchObservedRunningTime="2025-10-07 12:46:53.11729073 +0000 UTC m=+1151.193077568" Oct 07 12:46:53 crc kubenswrapper[5024]: I1007 12:46:53.171423 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7b5d4dfdcd-m7phq" podStartSLOduration=2.171399248 podStartE2EDuration="2.171399248s" podCreationTimestamp="2025-10-07 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:46:53.149032171 +0000 UTC m=+1151.224819029" watchObservedRunningTime="2025-10-07 12:46:53.171399248 +0000 UTC m=+1151.247186086" Oct 07 
12:46:53 crc kubenswrapper[5024]: I1007 12:46:53.846673 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-d8dff8cf4-cpjpz" Oct 07 12:46:54 crc kubenswrapper[5024]: I1007 12:46:54.079257 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b5d4dfdcd-m7phq" Oct 07 12:46:54 crc kubenswrapper[5024]: I1007 12:46:54.079609 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b5d4dfdcd-m7phq" Oct 07 12:46:55 crc kubenswrapper[5024]: I1007 12:46:55.479043 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 07 12:46:55 crc kubenswrapper[5024]: I1007 12:46:55.480514 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 07 12:46:55 crc kubenswrapper[5024]: I1007 12:46:55.484396 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-xmtnt" Oct 07 12:46:55 crc kubenswrapper[5024]: I1007 12:46:55.486034 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Oct 07 12:46:55 crc kubenswrapper[5024]: I1007 12:46:55.486818 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Oct 07 12:46:55 crc kubenswrapper[5024]: I1007 12:46:55.492464 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 07 12:46:55 crc kubenswrapper[5024]: I1007 12:46:55.624801 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzk2h\" (UniqueName: \"kubernetes.io/projected/7c26baed-556b-410e-b33b-b4ac7782249b-kube-api-access-zzk2h\") pod \"openstackclient\" (UID: \"7c26baed-556b-410e-b33b-b4ac7782249b\") " pod="openstack/openstackclient" Oct 07 12:46:55 crc kubenswrapper[5024]: I1007 12:46:55.624867 5024 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7c26baed-556b-410e-b33b-b4ac7782249b-openstack-config-secret\") pod \"openstackclient\" (UID: \"7c26baed-556b-410e-b33b-b4ac7782249b\") " pod="openstack/openstackclient" Oct 07 12:46:55 crc kubenswrapper[5024]: I1007 12:46:55.624913 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7c26baed-556b-410e-b33b-b4ac7782249b-openstack-config\") pod \"openstackclient\" (UID: \"7c26baed-556b-410e-b33b-b4ac7782249b\") " pod="openstack/openstackclient" Oct 07 12:46:55 crc kubenswrapper[5024]: I1007 12:46:55.625039 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c26baed-556b-410e-b33b-b4ac7782249b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7c26baed-556b-410e-b33b-b4ac7782249b\") " pod="openstack/openstackclient" Oct 07 12:46:55 crc kubenswrapper[5024]: I1007 12:46:55.726004 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Oct 07 12:46:55 crc kubenswrapper[5024]: E1007 12:46:55.726678 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-zzk2h openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="7c26baed-556b-410e-b33b-b4ac7782249b" Oct 07 12:46:55 crc kubenswrapper[5024]: I1007 12:46:55.727128 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzk2h\" (UniqueName: \"kubernetes.io/projected/7c26baed-556b-410e-b33b-b4ac7782249b-kube-api-access-zzk2h\") pod \"openstackclient\" (UID: \"7c26baed-556b-410e-b33b-b4ac7782249b\") " 
pod="openstack/openstackclient" Oct 07 12:46:55 crc kubenswrapper[5024]: I1007 12:46:55.727225 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7c26baed-556b-410e-b33b-b4ac7782249b-openstack-config-secret\") pod \"openstackclient\" (UID: \"7c26baed-556b-410e-b33b-b4ac7782249b\") " pod="openstack/openstackclient" Oct 07 12:46:55 crc kubenswrapper[5024]: I1007 12:46:55.727268 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7c26baed-556b-410e-b33b-b4ac7782249b-openstack-config\") pod \"openstackclient\" (UID: \"7c26baed-556b-410e-b33b-b4ac7782249b\") " pod="openstack/openstackclient" Oct 07 12:46:55 crc kubenswrapper[5024]: I1007 12:46:55.727298 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c26baed-556b-410e-b33b-b4ac7782249b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7c26baed-556b-410e-b33b-b4ac7782249b\") " pod="openstack/openstackclient" Oct 07 12:46:55 crc kubenswrapper[5024]: I1007 12:46:55.729355 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7c26baed-556b-410e-b33b-b4ac7782249b-openstack-config\") pod \"openstackclient\" (UID: \"7c26baed-556b-410e-b33b-b4ac7782249b\") " pod="openstack/openstackclient" Oct 07 12:46:55 crc kubenswrapper[5024]: E1007 12:46:55.730375 5024 projected.go:194] Error preparing data for projected volume kube-api-access-zzk2h for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: User "system:node:crc" cannot create resource "serviceaccounts/token" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Oct 07 12:46:55 crc kubenswrapper[5024]: E1007 
12:46:55.730457 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7c26baed-556b-410e-b33b-b4ac7782249b-kube-api-access-zzk2h podName:7c26baed-556b-410e-b33b-b4ac7782249b nodeName:}" failed. No retries permitted until 2025-10-07 12:46:56.230434025 +0000 UTC m=+1154.306220853 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zzk2h" (UniqueName: "kubernetes.io/projected/7c26baed-556b-410e-b33b-b4ac7782249b-kube-api-access-zzk2h") pod "openstackclient" (UID: "7c26baed-556b-410e-b33b-b4ac7782249b") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: User "system:node:crc" cannot create resource "serviceaccounts/token" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Oct 07 12:46:55 crc kubenswrapper[5024]: I1007 12:46:55.732902 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7c26baed-556b-410e-b33b-b4ac7782249b-openstack-config-secret\") pod \"openstackclient\" (UID: \"7c26baed-556b-410e-b33b-b4ac7782249b\") " pod="openstack/openstackclient" Oct 07 12:46:55 crc kubenswrapper[5024]: I1007 12:46:55.735062 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c26baed-556b-410e-b33b-b4ac7782249b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7c26baed-556b-410e-b33b-b4ac7782249b\") " pod="openstack/openstackclient" Oct 07 12:46:55 crc kubenswrapper[5024]: I1007 12:46:55.738780 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Oct 07 12:46:55 crc kubenswrapper[5024]: I1007 12:46:55.757285 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 07 12:46:55 crc kubenswrapper[5024]: I1007 12:46:55.758595 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 07 12:46:55 crc kubenswrapper[5024]: I1007 12:46:55.773196 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 07 12:46:55 crc kubenswrapper[5024]: I1007 12:46:55.931565 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7564967b-adb6-452b-b141-c91016f1c9d8-openstack-config\") pod \"openstackclient\" (UID: \"7564967b-adb6-452b-b141-c91016f1c9d8\") " pod="openstack/openstackclient" Oct 07 12:46:55 crc kubenswrapper[5024]: I1007 12:46:55.932060 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7564967b-adb6-452b-b141-c91016f1c9d8-openstack-config-secret\") pod \"openstackclient\" (UID: \"7564967b-adb6-452b-b141-c91016f1c9d8\") " pod="openstack/openstackclient" Oct 07 12:46:55 crc kubenswrapper[5024]: I1007 12:46:55.932180 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6svxz\" (UniqueName: \"kubernetes.io/projected/7564967b-adb6-452b-b141-c91016f1c9d8-kube-api-access-6svxz\") pod \"openstackclient\" (UID: \"7564967b-adb6-452b-b141-c91016f1c9d8\") " pod="openstack/openstackclient" Oct 07 12:46:55 crc kubenswrapper[5024]: I1007 12:46:55.932221 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7564967b-adb6-452b-b141-c91016f1c9d8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7564967b-adb6-452b-b141-c91016f1c9d8\") " pod="openstack/openstackclient" Oct 07 12:46:56 crc kubenswrapper[5024]: I1007 12:46:56.038456 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/7564967b-adb6-452b-b141-c91016f1c9d8-openstack-config-secret\") pod \"openstackclient\" (UID: \"7564967b-adb6-452b-b141-c91016f1c9d8\") " pod="openstack/openstackclient" Oct 07 12:46:56 crc kubenswrapper[5024]: I1007 12:46:56.038525 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6svxz\" (UniqueName: \"kubernetes.io/projected/7564967b-adb6-452b-b141-c91016f1c9d8-kube-api-access-6svxz\") pod \"openstackclient\" (UID: \"7564967b-adb6-452b-b141-c91016f1c9d8\") " pod="openstack/openstackclient" Oct 07 12:46:56 crc kubenswrapper[5024]: I1007 12:46:56.038554 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7564967b-adb6-452b-b141-c91016f1c9d8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7564967b-adb6-452b-b141-c91016f1c9d8\") " pod="openstack/openstackclient" Oct 07 12:46:56 crc kubenswrapper[5024]: I1007 12:46:56.038667 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7564967b-adb6-452b-b141-c91016f1c9d8-openstack-config\") pod \"openstackclient\" (UID: \"7564967b-adb6-452b-b141-c91016f1c9d8\") " pod="openstack/openstackclient" Oct 07 12:46:56 crc kubenswrapper[5024]: I1007 12:46:56.039517 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7564967b-adb6-452b-b141-c91016f1c9d8-openstack-config\") pod \"openstackclient\" (UID: \"7564967b-adb6-452b-b141-c91016f1c9d8\") " pod="openstack/openstackclient" Oct 07 12:46:56 crc kubenswrapper[5024]: I1007 12:46:56.042580 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7564967b-adb6-452b-b141-c91016f1c9d8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7564967b-adb6-452b-b141-c91016f1c9d8\") " 
pod="openstack/openstackclient" Oct 07 12:46:56 crc kubenswrapper[5024]: I1007 12:46:56.042830 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7564967b-adb6-452b-b141-c91016f1c9d8-openstack-config-secret\") pod \"openstackclient\" (UID: \"7564967b-adb6-452b-b141-c91016f1c9d8\") " pod="openstack/openstackclient" Oct 07 12:46:56 crc kubenswrapper[5024]: I1007 12:46:56.060120 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6svxz\" (UniqueName: \"kubernetes.io/projected/7564967b-adb6-452b-b141-c91016f1c9d8-kube-api-access-6svxz\") pod \"openstackclient\" (UID: \"7564967b-adb6-452b-b141-c91016f1c9d8\") " pod="openstack/openstackclient" Oct 07 12:46:56 crc kubenswrapper[5024]: I1007 12:46:56.098595 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 07 12:46:56 crc kubenswrapper[5024]: I1007 12:46:56.101667 5024 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="7c26baed-556b-410e-b33b-b4ac7782249b" podUID="7564967b-adb6-452b-b141-c91016f1c9d8" Oct 07 12:46:56 crc kubenswrapper[5024]: I1007 12:46:56.120541 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 07 12:46:56 crc kubenswrapper[5024]: I1007 12:46:56.133807 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 07 12:46:56 crc kubenswrapper[5024]: I1007 12:46:56.243670 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c26baed-556b-410e-b33b-b4ac7782249b-combined-ca-bundle\") pod \"7c26baed-556b-410e-b33b-b4ac7782249b\" (UID: \"7c26baed-556b-410e-b33b-b4ac7782249b\") " Oct 07 12:46:56 crc kubenswrapper[5024]: I1007 12:46:56.244037 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7c26baed-556b-410e-b33b-b4ac7782249b-openstack-config-secret\") pod \"7c26baed-556b-410e-b33b-b4ac7782249b\" (UID: \"7c26baed-556b-410e-b33b-b4ac7782249b\") " Oct 07 12:46:56 crc kubenswrapper[5024]: I1007 12:46:56.244157 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7c26baed-556b-410e-b33b-b4ac7782249b-openstack-config\") pod \"7c26baed-556b-410e-b33b-b4ac7782249b\" (UID: \"7c26baed-556b-410e-b33b-b4ac7782249b\") " Oct 07 12:46:56 crc kubenswrapper[5024]: I1007 12:46:56.245061 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzk2h\" (UniqueName: \"kubernetes.io/projected/7c26baed-556b-410e-b33b-b4ac7782249b-kube-api-access-zzk2h\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:56 crc kubenswrapper[5024]: I1007 12:46:56.246897 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c26baed-556b-410e-b33b-b4ac7782249b-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "7c26baed-556b-410e-b33b-b4ac7782249b" (UID: "7c26baed-556b-410e-b33b-b4ac7782249b"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:46:56 crc kubenswrapper[5024]: I1007 12:46:56.249293 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c26baed-556b-410e-b33b-b4ac7782249b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c26baed-556b-410e-b33b-b4ac7782249b" (UID: "7c26baed-556b-410e-b33b-b4ac7782249b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:46:56 crc kubenswrapper[5024]: I1007 12:46:56.253444 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c26baed-556b-410e-b33b-b4ac7782249b-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "7c26baed-556b-410e-b33b-b4ac7782249b" (UID: "7c26baed-556b-410e-b33b-b4ac7782249b"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:46:56 crc kubenswrapper[5024]: I1007 12:46:56.347120 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c26baed-556b-410e-b33b-b4ac7782249b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:56 crc kubenswrapper[5024]: I1007 12:46:56.347159 5024 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7c26baed-556b-410e-b33b-b4ac7782249b-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:56 crc kubenswrapper[5024]: I1007 12:46:56.347171 5024 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7c26baed-556b-410e-b33b-b4ac7782249b-openstack-config\") on node \"crc\" DevicePath \"\"" Oct 07 12:46:56 crc kubenswrapper[5024]: I1007 12:46:56.565568 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 07 12:46:56 crc kubenswrapper[5024]: W1007 
12:46:56.574673 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7564967b_adb6_452b_b141_c91016f1c9d8.slice/crio-5d3656d63c45670c5c0b7f3ae1f0f40a1df63399bf0ff330ab0f8d4d03aceb3e WatchSource:0}: Error finding container 5d3656d63c45670c5c0b7f3ae1f0f40a1df63399bf0ff330ab0f8d4d03aceb3e: Status 404 returned error can't find the container with id 5d3656d63c45670c5c0b7f3ae1f0f40a1df63399bf0ff330ab0f8d4d03aceb3e Oct 07 12:46:56 crc kubenswrapper[5024]: I1007 12:46:56.766704 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c26baed-556b-410e-b33b-b4ac7782249b" path="/var/lib/kubelet/pods/7c26baed-556b-410e-b33b-b4ac7782249b/volumes" Oct 07 12:46:57 crc kubenswrapper[5024]: I1007 12:46:57.109798 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7564967b-adb6-452b-b141-c91016f1c9d8","Type":"ContainerStarted","Data":"5d3656d63c45670c5c0b7f3ae1f0f40a1df63399bf0ff330ab0f8d4d03aceb3e"} Oct 07 12:46:57 crc kubenswrapper[5024]: I1007 12:46:57.111943 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 07 12:46:57 crc kubenswrapper[5024]: I1007 12:46:57.111937 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pzbf6" event={"ID":"2ac3ad2e-791b-4133-8417-61c5465da6ea","Type":"ContainerStarted","Data":"046c27e6f898b3cb5e122a7733d11884ec734fe9c4c7e4d7f954ed9f18a52f89"} Oct 07 12:46:57 crc kubenswrapper[5024]: I1007 12:46:57.130653 5024 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="7c26baed-556b-410e-b33b-b4ac7782249b" podUID="7564967b-adb6-452b-b141-c91016f1c9d8" Oct 07 12:46:57 crc kubenswrapper[5024]: I1007 12:46:57.131624 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-pzbf6" podStartSLOduration=5.31287981 podStartE2EDuration="41.131607197s" podCreationTimestamp="2025-10-07 12:46:16 +0000 UTC" firstStartedPulling="2025-10-07 12:46:20.430401348 +0000 UTC m=+1118.506188186" lastFinishedPulling="2025-10-07 12:46:56.249128735 +0000 UTC m=+1154.324915573" observedRunningTime="2025-10-07 12:46:57.125198079 +0000 UTC m=+1155.200984917" watchObservedRunningTime="2025-10-07 12:46:57.131607197 +0000 UTC m=+1155.207394035" Oct 07 12:46:58 crc kubenswrapper[5024]: I1007 12:46:58.731313 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb684768f-wsqxr" Oct 07 12:46:58 crc kubenswrapper[5024]: I1007 12:46:58.799857 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-hsjwt"] Oct 07 12:46:58 crc kubenswrapper[5024]: I1007 12:46:58.800460 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt" podUID="2274abe4-ee52-4816-ab0b-31774782dc36" containerName="dnsmasq-dns" containerID="cri-o://4271e11fa3c3ebd964dfbca7cf605efc900fb2f27d0cca2395b83e18eff47da7" gracePeriod=10 Oct 07 12:46:59 crc 
kubenswrapper[5024]: I1007 12:46:59.155086 5024 generic.go:334] "Generic (PLEG): container finished" podID="2274abe4-ee52-4816-ab0b-31774782dc36" containerID="4271e11fa3c3ebd964dfbca7cf605efc900fb2f27d0cca2395b83e18eff47da7" exitCode=0
Oct 07 12:46:59 crc kubenswrapper[5024]: I1007 12:46:59.155210 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt" event={"ID":"2274abe4-ee52-4816-ab0b-31774782dc36","Type":"ContainerDied","Data":"4271e11fa3c3ebd964dfbca7cf605efc900fb2f27d0cca2395b83e18eff47da7"}
Oct 07 12:46:59 crc kubenswrapper[5024]: I1007 12:46:59.163301 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt" podUID="2274abe4-ee52-4816-ab0b-31774782dc36" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: connect: connection refused"
Oct 07 12:46:59 crc kubenswrapper[5024]: I1007 12:46:59.798883 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt"
Oct 07 12:46:59 crc kubenswrapper[5024]: I1007 12:46:59.931316 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2274abe4-ee52-4816-ab0b-31774782dc36-dns-svc\") pod \"2274abe4-ee52-4816-ab0b-31774782dc36\" (UID: \"2274abe4-ee52-4816-ab0b-31774782dc36\") "
Oct 07 12:46:59 crc kubenswrapper[5024]: I1007 12:46:59.931398 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9xdn\" (UniqueName: \"kubernetes.io/projected/2274abe4-ee52-4816-ab0b-31774782dc36-kube-api-access-b9xdn\") pod \"2274abe4-ee52-4816-ab0b-31774782dc36\" (UID: \"2274abe4-ee52-4816-ab0b-31774782dc36\") "
Oct 07 12:46:59 crc kubenswrapper[5024]: I1007 12:46:59.931507 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2274abe4-ee52-4816-ab0b-31774782dc36-config\") pod \"2274abe4-ee52-4816-ab0b-31774782dc36\" (UID: \"2274abe4-ee52-4816-ab0b-31774782dc36\") "
Oct 07 12:46:59 crc kubenswrapper[5024]: I1007 12:46:59.931554 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2274abe4-ee52-4816-ab0b-31774782dc36-ovsdbserver-nb\") pod \"2274abe4-ee52-4816-ab0b-31774782dc36\" (UID: \"2274abe4-ee52-4816-ab0b-31774782dc36\") "
Oct 07 12:46:59 crc kubenswrapper[5024]: I1007 12:46:59.931580 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2274abe4-ee52-4816-ab0b-31774782dc36-ovsdbserver-sb\") pod \"2274abe4-ee52-4816-ab0b-31774782dc36\" (UID: \"2274abe4-ee52-4816-ab0b-31774782dc36\") "
Oct 07 12:46:59 crc kubenswrapper[5024]: I1007 12:46:59.942589 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2274abe4-ee52-4816-ab0b-31774782dc36-kube-api-access-b9xdn" (OuterVolumeSpecName: "kube-api-access-b9xdn") pod "2274abe4-ee52-4816-ab0b-31774782dc36" (UID: "2274abe4-ee52-4816-ab0b-31774782dc36"). InnerVolumeSpecName "kube-api-access-b9xdn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 12:46:59 crc kubenswrapper[5024]: I1007 12:46:59.991981 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2274abe4-ee52-4816-ab0b-31774782dc36-config" (OuterVolumeSpecName: "config") pod "2274abe4-ee52-4816-ab0b-31774782dc36" (UID: "2274abe4-ee52-4816-ab0b-31774782dc36"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 07 12:47:00 crc kubenswrapper[5024]: I1007 12:47:00.009601 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2274abe4-ee52-4816-ab0b-31774782dc36-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2274abe4-ee52-4816-ab0b-31774782dc36" (UID: "2274abe4-ee52-4816-ab0b-31774782dc36"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 07 12:47:00 crc kubenswrapper[5024]: I1007 12:47:00.026659 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2274abe4-ee52-4816-ab0b-31774782dc36-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2274abe4-ee52-4816-ab0b-31774782dc36" (UID: "2274abe4-ee52-4816-ab0b-31774782dc36"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 07 12:47:00 crc kubenswrapper[5024]: I1007 12:47:00.035098 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2274abe4-ee52-4816-ab0b-31774782dc36-config\") on node \"crc\" DevicePath \"\""
Oct 07 12:47:00 crc kubenswrapper[5024]: I1007 12:47:00.035132 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2274abe4-ee52-4816-ab0b-31774782dc36-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 07 12:47:00 crc kubenswrapper[5024]: I1007 12:47:00.035230 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2274abe4-ee52-4816-ab0b-31774782dc36-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 07 12:47:00 crc kubenswrapper[5024]: I1007 12:47:00.035244 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9xdn\" (UniqueName: \"kubernetes.io/projected/2274abe4-ee52-4816-ab0b-31774782dc36-kube-api-access-b9xdn\") on node \"crc\" DevicePath \"\""
Oct 07 12:47:00 crc kubenswrapper[5024]: I1007 12:47:00.060391 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2274abe4-ee52-4816-ab0b-31774782dc36-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2274abe4-ee52-4816-ab0b-31774782dc36" (UID: "2274abe4-ee52-4816-ab0b-31774782dc36"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 07 12:47:00 crc kubenswrapper[5024]: I1007 12:47:00.136416 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2274abe4-ee52-4816-ab0b-31774782dc36-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 07 12:47:00 crc kubenswrapper[5024]: I1007 12:47:00.166432 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt" event={"ID":"2274abe4-ee52-4816-ab0b-31774782dc36","Type":"ContainerDied","Data":"b24b216831768f5bc05b341cd7166670fde940728aac04ca07e328cd24eba9da"}
Oct 07 12:47:00 crc kubenswrapper[5024]: I1007 12:47:00.166489 5024 scope.go:117] "RemoveContainer" containerID="4271e11fa3c3ebd964dfbca7cf605efc900fb2f27d0cca2395b83e18eff47da7"
Oct 07 12:47:00 crc kubenswrapper[5024]: I1007 12:47:00.166488 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-hsjwt"
Oct 07 12:47:00 crc kubenswrapper[5024]: I1007 12:47:00.202015 5024 scope.go:117] "RemoveContainer" containerID="ef323b81749f59b5c2bb55a30be5ba51d994c1b2b50eb62b4a6573729e7b6f2d"
Oct 07 12:47:00 crc kubenswrapper[5024]: I1007 12:47:00.205397 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-hsjwt"]
Oct 07 12:47:00 crc kubenswrapper[5024]: I1007 12:47:00.216039 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-hsjwt"]
Oct 07 12:47:00 crc kubenswrapper[5024]: I1007 12:47:00.276433 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-69b586f4c8-2p4k4"
Oct 07 12:47:00 crc kubenswrapper[5024]: I1007 12:47:00.560529 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-69b586f4c8-2p4k4"
Oct 07 12:47:00 crc kubenswrapper[5024]: I1007 12:47:00.764525 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2274abe4-ee52-4816-ab0b-31774782dc36" path="/var/lib/kubelet/pods/2274abe4-ee52-4816-ab0b-31774782dc36/volumes"
Oct 07 12:47:03 crc kubenswrapper[5024]: I1007 12:47:03.194637 5024 generic.go:334] "Generic (PLEG): container finished" podID="2ac3ad2e-791b-4133-8417-61c5465da6ea" containerID="046c27e6f898b3cb5e122a7733d11884ec734fe9c4c7e4d7f954ed9f18a52f89" exitCode=0
Oct 07 12:47:03 crc kubenswrapper[5024]: I1007 12:47:03.194724 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pzbf6" event={"ID":"2ac3ad2e-791b-4133-8417-61c5465da6ea","Type":"ContainerDied","Data":"046c27e6f898b3cb5e122a7733d11884ec734fe9c4c7e4d7f954ed9f18a52f89"}
Oct 07 12:47:03 crc kubenswrapper[5024]: I1007 12:47:03.307816 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7b5d4dfdcd-m7phq"
Oct 07 12:47:03 crc kubenswrapper[5024]: I1007 12:47:03.425574 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7b5d4dfdcd-m7phq"
Oct 07 12:47:03 crc kubenswrapper[5024]: I1007 12:47:03.499688 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-69b586f4c8-2p4k4"]
Oct 07 12:47:03 crc kubenswrapper[5024]: I1007 12:47:03.499909 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-69b586f4c8-2p4k4" podUID="6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34" containerName="barbican-api-log" containerID="cri-o://b75c46af037315d6d8d3cce69dc9c6fdfbaa492906f3669319417daedf94965c" gracePeriod=30
Oct 07 12:47:03 crc kubenswrapper[5024]: I1007 12:47:03.500385 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-69b586f4c8-2p4k4" podUID="6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34" containerName="barbican-api" containerID="cri-o://d00f889456697a588b7f53ac3f2acc468a6e8df567d2f607d531cf27044641d8" gracePeriod=30
Oct 07 12:47:04 crc kubenswrapper[5024]: I1007 12:47:04.205341 5024 generic.go:334] "Generic (PLEG): container finished" podID="6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34" containerID="b75c46af037315d6d8d3cce69dc9c6fdfbaa492906f3669319417daedf94965c" exitCode=143
Oct 07 12:47:04 crc kubenswrapper[5024]: I1007 12:47:04.205426 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69b586f4c8-2p4k4" event={"ID":"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34","Type":"ContainerDied","Data":"b75c46af037315d6d8d3cce69dc9c6fdfbaa492906f3669319417daedf94965c"}
Oct 07 12:47:04 crc kubenswrapper[5024]: I1007 12:47:04.640089 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-sh44r"]
Oct 07 12:47:04 crc kubenswrapper[5024]: E1007 12:47:04.640533 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2274abe4-ee52-4816-ab0b-31774782dc36" containerName="dnsmasq-dns"
Oct 07 12:47:04 crc kubenswrapper[5024]: I1007 12:47:04.640556 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2274abe4-ee52-4816-ab0b-31774782dc36" containerName="dnsmasq-dns"
Oct 07 12:47:04 crc kubenswrapper[5024]: E1007 12:47:04.640600 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2274abe4-ee52-4816-ab0b-31774782dc36" containerName="init"
Oct 07 12:47:04 crc kubenswrapper[5024]: I1007 12:47:04.640610 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2274abe4-ee52-4816-ab0b-31774782dc36" containerName="init"
Oct 07 12:47:04 crc kubenswrapper[5024]: I1007 12:47:04.640843 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="2274abe4-ee52-4816-ab0b-31774782dc36" containerName="dnsmasq-dns"
Oct 07 12:47:04 crc kubenswrapper[5024]: I1007 12:47:04.641619 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sh44r"
Oct 07 12:47:04 crc kubenswrapper[5024]: I1007 12:47:04.682285 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 07 12:47:04 crc kubenswrapper[5024]: I1007 12:47:04.682610 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e5664fb1-cd66-429c-a2b4-3d5d94e966f6" containerName="ceilometer-central-agent" containerID="cri-o://be1ab799864eb1db2b0e4ca4a6197451c7ec9d50318f4aa426fee505341a9ec9" gracePeriod=30
Oct 07 12:47:04 crc kubenswrapper[5024]: I1007 12:47:04.682676 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e5664fb1-cd66-429c-a2b4-3d5d94e966f6" containerName="sg-core" containerID="cri-o://866d8e655101df38a75187a736e9d094eed461c76dadfa4d88e3a27fbed5a638" gracePeriod=30
Oct 07 12:47:04 crc kubenswrapper[5024]: I1007 12:47:04.682657 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e5664fb1-cd66-429c-a2b4-3d5d94e966f6" containerName="proxy-httpd" containerID="cri-o://5327e081dee08e0902d81ea2823d09b30bd3925fc747669e6b0868e674482079" gracePeriod=30
Oct 07 12:47:04 crc kubenswrapper[5024]: I1007 12:47:04.682716 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e5664fb1-cd66-429c-a2b4-3d5d94e966f6" containerName="ceilometer-notification-agent" containerID="cri-o://761f98543d20ddc9bdccc0288ad1e1a6795257bcd0ede7237c9d03302c10076a" gracePeriod=30
Oct 07 12:47:04 crc kubenswrapper[5024]: I1007 12:47:04.697051 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-sh44r"]
Oct 07 12:47:04 crc kubenswrapper[5024]: I1007 12:47:04.699232 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="e5664fb1-cd66-429c-a2b4-3d5d94e966f6" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Oct 07 12:47:04 crc kubenswrapper[5024]: I1007 12:47:04.749836 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-z29vv"]
Oct 07 12:47:04 crc kubenswrapper[5024]: I1007 12:47:04.752221 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-z29vv"
Oct 07 12:47:04 crc kubenswrapper[5024]: I1007 12:47:04.785542 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-z29vv"]
Oct 07 12:47:04 crc kubenswrapper[5024]: I1007 12:47:04.819074 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5smnd\" (UniqueName: \"kubernetes.io/projected/a971f3af-a1cf-4423-b776-bb662957878c-kube-api-access-5smnd\") pod \"nova-api-db-create-sh44r\" (UID: \"a971f3af-a1cf-4423-b776-bb662957878c\") " pod="openstack/nova-api-db-create-sh44r"
Oct 07 12:47:04 crc kubenswrapper[5024]: I1007 12:47:04.851960 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-kqnsq"]
Oct 07 12:47:04 crc kubenswrapper[5024]: I1007 12:47:04.853112 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kqnsq"
Oct 07 12:47:04 crc kubenswrapper[5024]: I1007 12:47:04.859017 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-kqnsq"]
Oct 07 12:47:04 crc kubenswrapper[5024]: I1007 12:47:04.920864 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvdlg\" (UniqueName: \"kubernetes.io/projected/346dd65c-ac64-4c3b-b9d0-6f4d16e19f51-kube-api-access-zvdlg\") pod \"nova-cell0-db-create-z29vv\" (UID: \"346dd65c-ac64-4c3b-b9d0-6f4d16e19f51\") " pod="openstack/nova-cell0-db-create-z29vv"
Oct 07 12:47:04 crc kubenswrapper[5024]: I1007 12:47:04.920930 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5smnd\" (UniqueName: \"kubernetes.io/projected/a971f3af-a1cf-4423-b776-bb662957878c-kube-api-access-5smnd\") pod \"nova-api-db-create-sh44r\" (UID: \"a971f3af-a1cf-4423-b776-bb662957878c\") " pod="openstack/nova-api-db-create-sh44r"
Oct 07 12:47:04 crc kubenswrapper[5024]: I1007 12:47:04.941917 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5smnd\" (UniqueName: \"kubernetes.io/projected/a971f3af-a1cf-4423-b776-bb662957878c-kube-api-access-5smnd\") pod \"nova-api-db-create-sh44r\" (UID: \"a971f3af-a1cf-4423-b776-bb662957878c\") " pod="openstack/nova-api-db-create-sh44r"
Oct 07 12:47:04 crc kubenswrapper[5024]: I1007 12:47:04.964169 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sh44r"
Oct 07 12:47:05 crc kubenswrapper[5024]: I1007 12:47:05.023472 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvdlg\" (UniqueName: \"kubernetes.io/projected/346dd65c-ac64-4c3b-b9d0-6f4d16e19f51-kube-api-access-zvdlg\") pod \"nova-cell0-db-create-z29vv\" (UID: \"346dd65c-ac64-4c3b-b9d0-6f4d16e19f51\") " pod="openstack/nova-cell0-db-create-z29vv"
Oct 07 12:47:05 crc kubenswrapper[5024]: I1007 12:47:05.023532 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rsmj\" (UniqueName: \"kubernetes.io/projected/65244b6b-8bbf-47a0-b0e8-568a6fdba17d-kube-api-access-7rsmj\") pod \"nova-cell1-db-create-kqnsq\" (UID: \"65244b6b-8bbf-47a0-b0e8-568a6fdba17d\") " pod="openstack/nova-cell1-db-create-kqnsq"
Oct 07 12:47:05 crc kubenswrapper[5024]: I1007 12:47:05.054932 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvdlg\" (UniqueName: \"kubernetes.io/projected/346dd65c-ac64-4c3b-b9d0-6f4d16e19f51-kube-api-access-zvdlg\") pod \"nova-cell0-db-create-z29vv\" (UID: \"346dd65c-ac64-4c3b-b9d0-6f4d16e19f51\") " pod="openstack/nova-cell0-db-create-z29vv"
Oct 07 12:47:05 crc kubenswrapper[5024]: I1007 12:47:05.071620 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-z29vv"
Oct 07 12:47:05 crc kubenswrapper[5024]: I1007 12:47:05.125693 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rsmj\" (UniqueName: \"kubernetes.io/projected/65244b6b-8bbf-47a0-b0e8-568a6fdba17d-kube-api-access-7rsmj\") pod \"nova-cell1-db-create-kqnsq\" (UID: \"65244b6b-8bbf-47a0-b0e8-568a6fdba17d\") " pod="openstack/nova-cell1-db-create-kqnsq"
Oct 07 12:47:05 crc kubenswrapper[5024]: I1007 12:47:05.162768 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rsmj\" (UniqueName: \"kubernetes.io/projected/65244b6b-8bbf-47a0-b0e8-568a6fdba17d-kube-api-access-7rsmj\") pod \"nova-cell1-db-create-kqnsq\" (UID: \"65244b6b-8bbf-47a0-b0e8-568a6fdba17d\") " pod="openstack/nova-cell1-db-create-kqnsq"
Oct 07 12:47:05 crc kubenswrapper[5024]: I1007 12:47:05.166792 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kqnsq"
Oct 07 12:47:05 crc kubenswrapper[5024]: I1007 12:47:05.228775 5024 generic.go:334] "Generic (PLEG): container finished" podID="e5664fb1-cd66-429c-a2b4-3d5d94e966f6" containerID="5327e081dee08e0902d81ea2823d09b30bd3925fc747669e6b0868e674482079" exitCode=0
Oct 07 12:47:05 crc kubenswrapper[5024]: I1007 12:47:05.228855 5024 generic.go:334] "Generic (PLEG): container finished" podID="e5664fb1-cd66-429c-a2b4-3d5d94e966f6" containerID="866d8e655101df38a75187a736e9d094eed461c76dadfa4d88e3a27fbed5a638" exitCode=2
Oct 07 12:47:05 crc kubenswrapper[5024]: I1007 12:47:05.228864 5024 generic.go:334] "Generic (PLEG): container finished" podID="e5664fb1-cd66-429c-a2b4-3d5d94e966f6" containerID="be1ab799864eb1db2b0e4ca4a6197451c7ec9d50318f4aa426fee505341a9ec9" exitCode=0
Oct 07 12:47:05 crc kubenswrapper[5024]: I1007 12:47:05.228826 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5664fb1-cd66-429c-a2b4-3d5d94e966f6","Type":"ContainerDied","Data":"5327e081dee08e0902d81ea2823d09b30bd3925fc747669e6b0868e674482079"}
Oct 07 12:47:05 crc kubenswrapper[5024]: I1007 12:47:05.228896 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5664fb1-cd66-429c-a2b4-3d5d94e966f6","Type":"ContainerDied","Data":"866d8e655101df38a75187a736e9d094eed461c76dadfa4d88e3a27fbed5a638"}
Oct 07 12:47:05 crc kubenswrapper[5024]: I1007 12:47:05.228920 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5664fb1-cd66-429c-a2b4-3d5d94e966f6","Type":"ContainerDied","Data":"be1ab799864eb1db2b0e4ca4a6197451c7ec9d50318f4aa426fee505341a9ec9"}
Oct 07 12:47:06 crc kubenswrapper[5024]: I1007 12:47:06.659817 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-69b586f4c8-2p4k4" podUID="6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.148:9311/healthcheck\": read tcp 10.217.0.2:38774->10.217.0.148:9311: read: connection reset by peer"
Oct 07 12:47:06 crc kubenswrapper[5024]: I1007 12:47:06.659933 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-69b586f4c8-2p4k4" podUID="6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.148:9311/healthcheck\": read tcp 10.217.0.2:38770->10.217.0.148:9311: read: connection reset by peer"
Oct 07 12:47:07 crc kubenswrapper[5024]: I1007 12:47:07.247480 5024 generic.go:334] "Generic (PLEG): container finished" podID="6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34" containerID="d00f889456697a588b7f53ac3f2acc468a6e8df567d2f607d531cf27044641d8" exitCode=0
Oct 07 12:47:07 crc kubenswrapper[5024]: I1007 12:47:07.247547 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69b586f4c8-2p4k4" event={"ID":"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34","Type":"ContainerDied","Data":"d00f889456697a588b7f53ac3f2acc468a6e8df567d2f607d531cf27044641d8"}
Oct 07 12:47:07 crc kubenswrapper[5024]: I1007 12:47:07.917076 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pzbf6"
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.066694 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-69b586f4c8-2p4k4"
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.081070 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ac3ad2e-791b-4133-8417-61c5465da6ea-scripts\") pod \"2ac3ad2e-791b-4133-8417-61c5465da6ea\" (UID: \"2ac3ad2e-791b-4133-8417-61c5465da6ea\") "
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.081196 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ac3ad2e-791b-4133-8417-61c5465da6ea-db-sync-config-data\") pod \"2ac3ad2e-791b-4133-8417-61c5465da6ea\" (UID: \"2ac3ad2e-791b-4133-8417-61c5465da6ea\") "
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.081216 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ac3ad2e-791b-4133-8417-61c5465da6ea-etc-machine-id\") pod \"2ac3ad2e-791b-4133-8417-61c5465da6ea\" (UID: \"2ac3ad2e-791b-4133-8417-61c5465da6ea\") "
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.081324 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55b7v\" (UniqueName: \"kubernetes.io/projected/2ac3ad2e-791b-4133-8417-61c5465da6ea-kube-api-access-55b7v\") pod \"2ac3ad2e-791b-4133-8417-61c5465da6ea\" (UID: \"2ac3ad2e-791b-4133-8417-61c5465da6ea\") "
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.081346 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac3ad2e-791b-4133-8417-61c5465da6ea-combined-ca-bundle\") pod \"2ac3ad2e-791b-4133-8417-61c5465da6ea\" (UID: \"2ac3ad2e-791b-4133-8417-61c5465da6ea\") "
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.081407 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ac3ad2e-791b-4133-8417-61c5465da6ea-config-data\") pod \"2ac3ad2e-791b-4133-8417-61c5465da6ea\" (UID: \"2ac3ad2e-791b-4133-8417-61c5465da6ea\") "
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.082161 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ac3ad2e-791b-4133-8417-61c5465da6ea-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2ac3ad2e-791b-4133-8417-61c5465da6ea" (UID: "2ac3ad2e-791b-4133-8417-61c5465da6ea"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.089624 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ac3ad2e-791b-4133-8417-61c5465da6ea-kube-api-access-55b7v" (OuterVolumeSpecName: "kube-api-access-55b7v") pod "2ac3ad2e-791b-4133-8417-61c5465da6ea" (UID: "2ac3ad2e-791b-4133-8417-61c5465da6ea"). InnerVolumeSpecName "kube-api-access-55b7v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.103925 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ac3ad2e-791b-4133-8417-61c5465da6ea-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2ac3ad2e-791b-4133-8417-61c5465da6ea" (UID: "2ac3ad2e-791b-4133-8417-61c5465da6ea"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.106297 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ac3ad2e-791b-4133-8417-61c5465da6ea-scripts" (OuterVolumeSpecName: "scripts") pod "2ac3ad2e-791b-4133-8417-61c5465da6ea" (UID: "2ac3ad2e-791b-4133-8417-61c5465da6ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.130195 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ac3ad2e-791b-4133-8417-61c5465da6ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ac3ad2e-791b-4133-8417-61c5465da6ea" (UID: "2ac3ad2e-791b-4133-8417-61c5465da6ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.136255 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ac3ad2e-791b-4133-8417-61c5465da6ea-config-data" (OuterVolumeSpecName: "config-data") pod "2ac3ad2e-791b-4133-8417-61c5465da6ea" (UID: "2ac3ad2e-791b-4133-8417-61c5465da6ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.173201 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.182828 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-config-data\") pod \"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34\" (UID: \"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34\") "
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.182862 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-logs\") pod \"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34\" (UID: \"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34\") "
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.182933 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-combined-ca-bundle\") pod \"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34\" (UID: \"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34\") "
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.183061 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g692f\" (UniqueName: \"kubernetes.io/projected/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-kube-api-access-g692f\") pod \"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34\" (UID: \"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34\") "
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.183193 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-config-data-custom\") pod \"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34\" (UID: \"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34\") "
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.183620 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-logs" (OuterVolumeSpecName: "logs") pod "6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34" (UID: "6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.184051 5024 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ac3ad2e-791b-4133-8417-61c5465da6ea-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.184065 5024 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ac3ad2e-791b-4133-8417-61c5465da6ea-etc-machine-id\") on node \"crc\" DevicePath \"\""
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.184075 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55b7v\" (UniqueName: \"kubernetes.io/projected/2ac3ad2e-791b-4133-8417-61c5465da6ea-kube-api-access-55b7v\") on node \"crc\" DevicePath \"\""
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.184086 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac3ad2e-791b-4133-8417-61c5465da6ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.184095 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ac3ad2e-791b-4133-8417-61c5465da6ea-config-data\") on node \"crc\" DevicePath \"\""
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.184103 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ac3ad2e-791b-4133-8417-61c5465da6ea-scripts\") on node \"crc\" DevicePath \"\""
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.184111 5024 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-logs\") on node \"crc\" DevicePath \"\""
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.186875 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-kube-api-access-g692f" (OuterVolumeSpecName: "kube-api-access-g692f") pod "6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34" (UID: "6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34"). InnerVolumeSpecName "kube-api-access-g692f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.188316 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34" (UID: "6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.214176 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34" (UID: "6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.230769 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-config-data" (OuterVolumeSpecName: "config-data") pod "6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34" (UID: "6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.257396 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pzbf6" event={"ID":"2ac3ad2e-791b-4133-8417-61c5465da6ea","Type":"ContainerDied","Data":"f7e1066b7c6d9a85c674a369046f8f8b0167cf8b9cc370b57c6c4d354dc5563a"}
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.257639 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7e1066b7c6d9a85c674a369046f8f8b0167cf8b9cc370b57c6c4d354dc5563a"
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.257464 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pzbf6"
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.258923 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7564967b-adb6-452b-b141-c91016f1c9d8","Type":"ContainerStarted","Data":"63d2915ac2c2b6b448536fed89a281b6f4bd364db67e34af39cefa2c55cd69e0"}
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.262525 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-69b586f4c8-2p4k4"
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.262522 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69b586f4c8-2p4k4" event={"ID":"6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34","Type":"ContainerDied","Data":"df99ca55956a27b10d491a5b0a63f84d295c0d37c9d6d8358ef4b92501d4760c"}
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.262638 5024 scope.go:117] "RemoveContainer" containerID="d00f889456697a588b7f53ac3f2acc468a6e8df567d2f607d531cf27044641d8"
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.278752 5024 generic.go:334] "Generic (PLEG): container finished" podID="e5664fb1-cd66-429c-a2b4-3d5d94e966f6" containerID="761f98543d20ddc9bdccc0288ad1e1a6795257bcd0ede7237c9d03302c10076a" exitCode=0
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.278788 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5664fb1-cd66-429c-a2b4-3d5d94e966f6","Type":"ContainerDied","Data":"761f98543d20ddc9bdccc0288ad1e1a6795257bcd0ede7237c9d03302c10076a"}
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.278812 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5664fb1-cd66-429c-a2b4-3d5d94e966f6","Type":"ContainerDied","Data":"1c9bf4d829bc93b80aca2ab58e0ba6c4a48dd7c97a595f20cb2194fb993e764a"}
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.278869 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.294340 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-combined-ca-bundle\") pod \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\" (UID: \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") "
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.294422 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-scripts\") pod \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\" (UID: \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") "
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.294540 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-log-httpd\") pod \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\" (UID: \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") "
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.294591 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-config-data\") pod \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\" (UID: \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") "
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.294622 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-run-httpd\") pod \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\" (UID: \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") "
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.294638 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l5d6\" (UniqueName: \"kubernetes.io/projected/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-kube-api-access-4l5d6\") pod \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\" (UID: \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") "
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.294678 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-sg-core-conf-yaml\") pod \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\" (UID: \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") "
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.295192 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g692f\" (UniqueName: \"kubernetes.io/projected/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-kube-api-access-g692f\") on node \"crc\" DevicePath \"\""
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.295210 5024 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-config-data-custom\") on node \"crc\" DevicePath \"\""
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.295240 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-config-data\") on node \"crc\" DevicePath \"\""
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.295249 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.296152 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e5664fb1-cd66-429c-a2b4-3d5d94e966f6" (UID: "e5664fb1-cd66-429c-a2b4-3d5d94e966f6"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.296568 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e5664fb1-cd66-429c-a2b4-3d5d94e966f6" (UID: "e5664fb1-cd66-429c-a2b4-3d5d94e966f6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.306483 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.054167305 podStartE2EDuration="13.306463638s" podCreationTimestamp="2025-10-07 12:46:55 +0000 UTC" firstStartedPulling="2025-10-07 12:46:56.576796498 +0000 UTC m=+1154.652583326" lastFinishedPulling="2025-10-07 12:47:07.829092821 +0000 UTC m=+1165.904879659" observedRunningTime="2025-10-07 12:47:08.278379214 +0000 UTC m=+1166.354166052" watchObservedRunningTime="2025-10-07 12:47:08.306463638 +0000 UTC m=+1166.382250476" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.312892 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-kube-api-access-4l5d6" (OuterVolumeSpecName: "kube-api-access-4l5d6") pod "e5664fb1-cd66-429c-a2b4-3d5d94e966f6" (UID: "e5664fb1-cd66-429c-a2b4-3d5d94e966f6"). InnerVolumeSpecName "kube-api-access-4l5d6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.313609 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-scripts" (OuterVolumeSpecName: "scripts") pod "e5664fb1-cd66-429c-a2b4-3d5d94e966f6" (UID: "e5664fb1-cd66-429c-a2b4-3d5d94e966f6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.323320 5024 scope.go:117] "RemoveContainer" containerID="b75c46af037315d6d8d3cce69dc9c6fdfbaa492906f3669319417daedf94965c" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.323933 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-69b586f4c8-2p4k4"] Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.335209 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-69b586f4c8-2p4k4"] Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.339565 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-sh44r"] Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.344953 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e5664fb1-cd66-429c-a2b4-3d5d94e966f6" (UID: "e5664fb1-cd66-429c-a2b4-3d5d94e966f6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.345949 5024 scope.go:117] "RemoveContainer" containerID="5327e081dee08e0902d81ea2823d09b30bd3925fc747669e6b0868e674482079" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.377484 5024 scope.go:117] "RemoveContainer" containerID="866d8e655101df38a75187a736e9d094eed461c76dadfa4d88e3a27fbed5a638" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.395397 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5664fb1-cd66-429c-a2b4-3d5d94e966f6" (UID: "e5664fb1-cd66-429c-a2b4-3d5d94e966f6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.396017 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-combined-ca-bundle\") pod \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\" (UID: \"e5664fb1-cd66-429c-a2b4-3d5d94e966f6\") " Oct 07 12:47:08 crc kubenswrapper[5024]: W1007 12:47:08.396110 5024 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/e5664fb1-cd66-429c-a2b4-3d5d94e966f6/volumes/kubernetes.io~secret/combined-ca-bundle Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.396216 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5664fb1-cd66-429c-a2b4-3d5d94e966f6" (UID: "e5664fb1-cd66-429c-a2b4-3d5d94e966f6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.396545 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-scripts\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.396571 5024 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.396584 5024 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.396596 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l5d6\" (UniqueName: \"kubernetes.io/projected/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-kube-api-access-4l5d6\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.396607 5024 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.396618 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.403238 5024 scope.go:117] "RemoveContainer" containerID="761f98543d20ddc9bdccc0288ad1e1a6795257bcd0ede7237c9d03302c10076a" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.426545 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-config-data" (OuterVolumeSpecName: "config-data") pod "e5664fb1-cd66-429c-a2b4-3d5d94e966f6" (UID: "e5664fb1-cd66-429c-a2b4-3d5d94e966f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.440971 5024 scope.go:117] "RemoveContainer" containerID="be1ab799864eb1db2b0e4ca4a6197451c7ec9d50318f4aa426fee505341a9ec9" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.444078 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-kqnsq"] Oct 07 12:47:08 crc kubenswrapper[5024]: W1007 12:47:08.449627 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod346dd65c_ac64_4c3b_b9d0_6f4d16e19f51.slice/crio-e963824080a557f276ee3373f896be47e258feaa18ad71f1cd816d7b8f962070 WatchSource:0}: Error finding container e963824080a557f276ee3373f896be47e258feaa18ad71f1cd816d7b8f962070: Status 404 returned error can't find the container with id e963824080a557f276ee3373f896be47e258feaa18ad71f1cd816d7b8f962070 Oct 07 12:47:08 crc kubenswrapper[5024]: W1007 12:47:08.450936 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65244b6b_8bbf_47a0_b0e8_568a6fdba17d.slice/crio-8b73d4bfcffd078730bc13bda7a448ec1c2f8d93ba9986ae0bb1255beebeb577 WatchSource:0}: Error finding container 8b73d4bfcffd078730bc13bda7a448ec1c2f8d93ba9986ae0bb1255beebeb577: Status 404 returned error can't find the container with id 8b73d4bfcffd078730bc13bda7a448ec1c2f8d93ba9986ae0bb1255beebeb577 Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.451937 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-z29vv"] Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.463322 5024 scope.go:117] "RemoveContainer" 
containerID="5327e081dee08e0902d81ea2823d09b30bd3925fc747669e6b0868e674482079" Oct 07 12:47:08 crc kubenswrapper[5024]: E1007 12:47:08.463580 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5327e081dee08e0902d81ea2823d09b30bd3925fc747669e6b0868e674482079\": container with ID starting with 5327e081dee08e0902d81ea2823d09b30bd3925fc747669e6b0868e674482079 not found: ID does not exist" containerID="5327e081dee08e0902d81ea2823d09b30bd3925fc747669e6b0868e674482079" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.463612 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5327e081dee08e0902d81ea2823d09b30bd3925fc747669e6b0868e674482079"} err="failed to get container status \"5327e081dee08e0902d81ea2823d09b30bd3925fc747669e6b0868e674482079\": rpc error: code = NotFound desc = could not find container \"5327e081dee08e0902d81ea2823d09b30bd3925fc747669e6b0868e674482079\": container with ID starting with 5327e081dee08e0902d81ea2823d09b30bd3925fc747669e6b0868e674482079 not found: ID does not exist" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.463634 5024 scope.go:117] "RemoveContainer" containerID="866d8e655101df38a75187a736e9d094eed461c76dadfa4d88e3a27fbed5a638" Oct 07 12:47:08 crc kubenswrapper[5024]: E1007 12:47:08.463846 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"866d8e655101df38a75187a736e9d094eed461c76dadfa4d88e3a27fbed5a638\": container with ID starting with 866d8e655101df38a75187a736e9d094eed461c76dadfa4d88e3a27fbed5a638 not found: ID does not exist" containerID="866d8e655101df38a75187a736e9d094eed461c76dadfa4d88e3a27fbed5a638" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.463920 5024 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"866d8e655101df38a75187a736e9d094eed461c76dadfa4d88e3a27fbed5a638"} err="failed to get container status \"866d8e655101df38a75187a736e9d094eed461c76dadfa4d88e3a27fbed5a638\": rpc error: code = NotFound desc = could not find container \"866d8e655101df38a75187a736e9d094eed461c76dadfa4d88e3a27fbed5a638\": container with ID starting with 866d8e655101df38a75187a736e9d094eed461c76dadfa4d88e3a27fbed5a638 not found: ID does not exist" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.463959 5024 scope.go:117] "RemoveContainer" containerID="761f98543d20ddc9bdccc0288ad1e1a6795257bcd0ede7237c9d03302c10076a" Oct 07 12:47:08 crc kubenswrapper[5024]: E1007 12:47:08.464348 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"761f98543d20ddc9bdccc0288ad1e1a6795257bcd0ede7237c9d03302c10076a\": container with ID starting with 761f98543d20ddc9bdccc0288ad1e1a6795257bcd0ede7237c9d03302c10076a not found: ID does not exist" containerID="761f98543d20ddc9bdccc0288ad1e1a6795257bcd0ede7237c9d03302c10076a" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.464498 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"761f98543d20ddc9bdccc0288ad1e1a6795257bcd0ede7237c9d03302c10076a"} err="failed to get container status \"761f98543d20ddc9bdccc0288ad1e1a6795257bcd0ede7237c9d03302c10076a\": rpc error: code = NotFound desc = could not find container \"761f98543d20ddc9bdccc0288ad1e1a6795257bcd0ede7237c9d03302c10076a\": container with ID starting with 761f98543d20ddc9bdccc0288ad1e1a6795257bcd0ede7237c9d03302c10076a not found: ID does not exist" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.464605 5024 scope.go:117] "RemoveContainer" containerID="be1ab799864eb1db2b0e4ca4a6197451c7ec9d50318f4aa426fee505341a9ec9" Oct 07 12:47:08 crc kubenswrapper[5024]: E1007 12:47:08.464977 5024 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"be1ab799864eb1db2b0e4ca4a6197451c7ec9d50318f4aa426fee505341a9ec9\": container with ID starting with be1ab799864eb1db2b0e4ca4a6197451c7ec9d50318f4aa426fee505341a9ec9 not found: ID does not exist" containerID="be1ab799864eb1db2b0e4ca4a6197451c7ec9d50318f4aa426fee505341a9ec9" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.465000 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be1ab799864eb1db2b0e4ca4a6197451c7ec9d50318f4aa426fee505341a9ec9"} err="failed to get container status \"be1ab799864eb1db2b0e4ca4a6197451c7ec9d50318f4aa426fee505341a9ec9\": rpc error: code = NotFound desc = could not find container \"be1ab799864eb1db2b0e4ca4a6197451c7ec9d50318f4aa426fee505341a9ec9\": container with ID starting with be1ab799864eb1db2b0e4ca4a6197451c7ec9d50318f4aa426fee505341a9ec9 not found: ID does not exist" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.497710 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5664fb1-cd66-429c-a2b4-3d5d94e966f6-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.673883 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.683287 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.702852 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:47:08 crc kubenswrapper[5024]: E1007 12:47:08.703843 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5664fb1-cd66-429c-a2b4-3d5d94e966f6" containerName="ceilometer-central-agent" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.703870 5024 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e5664fb1-cd66-429c-a2b4-3d5d94e966f6" containerName="ceilometer-central-agent" Oct 07 12:47:08 crc kubenswrapper[5024]: E1007 12:47:08.703910 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5664fb1-cd66-429c-a2b4-3d5d94e966f6" containerName="sg-core" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.703921 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5664fb1-cd66-429c-a2b4-3d5d94e966f6" containerName="sg-core" Oct 07 12:47:08 crc kubenswrapper[5024]: E1007 12:47:08.703934 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5664fb1-cd66-429c-a2b4-3d5d94e966f6" containerName="ceilometer-notification-agent" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.703945 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5664fb1-cd66-429c-a2b4-3d5d94e966f6" containerName="ceilometer-notification-agent" Oct 07 12:47:08 crc kubenswrapper[5024]: E1007 12:47:08.703961 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5664fb1-cd66-429c-a2b4-3d5d94e966f6" containerName="proxy-httpd" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.703969 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5664fb1-cd66-429c-a2b4-3d5d94e966f6" containerName="proxy-httpd" Oct 07 12:47:08 crc kubenswrapper[5024]: E1007 12:47:08.703986 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac3ad2e-791b-4133-8417-61c5465da6ea" containerName="cinder-db-sync" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.703995 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ac3ad2e-791b-4133-8417-61c5465da6ea" containerName="cinder-db-sync" Oct 07 12:47:08 crc kubenswrapper[5024]: E1007 12:47:08.704017 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34" containerName="barbican-api" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.704025 5024 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34" containerName="barbican-api" Oct 07 12:47:08 crc kubenswrapper[5024]: E1007 12:47:08.704038 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34" containerName="barbican-api-log" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.704047 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34" containerName="barbican-api-log" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.704251 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34" containerName="barbican-api-log" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.704271 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ac3ad2e-791b-4133-8417-61c5465da6ea" containerName="cinder-db-sync" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.704287 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5664fb1-cd66-429c-a2b4-3d5d94e966f6" containerName="ceilometer-central-agent" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.704299 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5664fb1-cd66-429c-a2b4-3d5d94e966f6" containerName="sg-core" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.704315 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34" containerName="barbican-api" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.704329 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5664fb1-cd66-429c-a2b4-3d5d94e966f6" containerName="proxy-httpd" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.704346 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5664fb1-cd66-429c-a2b4-3d5d94e966f6" containerName="ceilometer-notification-agent" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.706713 5024 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.710339 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.710598 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.710860 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.761554 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34" path="/var/lib/kubelet/pods/6b1aacd4-83f0-4eb0-9cdb-4c7e31987e34/volumes" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.762447 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5664fb1-cd66-429c-a2b4-3d5d94e966f6" path="/var/lib/kubelet/pods/e5664fb1-cd66-429c-a2b4-3d5d94e966f6/volumes" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.802801 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5365f54a-61b6-482e-9fc7-5dcd171715b9-scripts\") pod \"ceilometer-0\" (UID: \"5365f54a-61b6-482e-9fc7-5dcd171715b9\") " pod="openstack/ceilometer-0" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.803006 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5365f54a-61b6-482e-9fc7-5dcd171715b9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5365f54a-61b6-482e-9fc7-5dcd171715b9\") " pod="openstack/ceilometer-0" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.803211 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqnpq\" (UniqueName: 
\"kubernetes.io/projected/5365f54a-61b6-482e-9fc7-5dcd171715b9-kube-api-access-pqnpq\") pod \"ceilometer-0\" (UID: \"5365f54a-61b6-482e-9fc7-5dcd171715b9\") " pod="openstack/ceilometer-0" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.803257 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5365f54a-61b6-482e-9fc7-5dcd171715b9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5365f54a-61b6-482e-9fc7-5dcd171715b9\") " pod="openstack/ceilometer-0" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.803323 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5365f54a-61b6-482e-9fc7-5dcd171715b9-config-data\") pod \"ceilometer-0\" (UID: \"5365f54a-61b6-482e-9fc7-5dcd171715b9\") " pod="openstack/ceilometer-0" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.803390 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5365f54a-61b6-482e-9fc7-5dcd171715b9-run-httpd\") pod \"ceilometer-0\" (UID: \"5365f54a-61b6-482e-9fc7-5dcd171715b9\") " pod="openstack/ceilometer-0" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.803446 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5365f54a-61b6-482e-9fc7-5dcd171715b9-log-httpd\") pod \"ceilometer-0\" (UID: \"5365f54a-61b6-482e-9fc7-5dcd171715b9\") " pod="openstack/ceilometer-0" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.905338 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqnpq\" (UniqueName: \"kubernetes.io/projected/5365f54a-61b6-482e-9fc7-5dcd171715b9-kube-api-access-pqnpq\") pod \"ceilometer-0\" (UID: 
\"5365f54a-61b6-482e-9fc7-5dcd171715b9\") " pod="openstack/ceilometer-0" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.905387 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5365f54a-61b6-482e-9fc7-5dcd171715b9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5365f54a-61b6-482e-9fc7-5dcd171715b9\") " pod="openstack/ceilometer-0" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.905449 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5365f54a-61b6-482e-9fc7-5dcd171715b9-config-data\") pod \"ceilometer-0\" (UID: \"5365f54a-61b6-482e-9fc7-5dcd171715b9\") " pod="openstack/ceilometer-0" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.905502 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5365f54a-61b6-482e-9fc7-5dcd171715b9-run-httpd\") pod \"ceilometer-0\" (UID: \"5365f54a-61b6-482e-9fc7-5dcd171715b9\") " pod="openstack/ceilometer-0" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.905574 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5365f54a-61b6-482e-9fc7-5dcd171715b9-log-httpd\") pod \"ceilometer-0\" (UID: \"5365f54a-61b6-482e-9fc7-5dcd171715b9\") " pod="openstack/ceilometer-0" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.905621 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5365f54a-61b6-482e-9fc7-5dcd171715b9-scripts\") pod \"ceilometer-0\" (UID: \"5365f54a-61b6-482e-9fc7-5dcd171715b9\") " pod="openstack/ceilometer-0" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.905659 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/5365f54a-61b6-482e-9fc7-5dcd171715b9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5365f54a-61b6-482e-9fc7-5dcd171715b9\") " pod="openstack/ceilometer-0" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.906325 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5365f54a-61b6-482e-9fc7-5dcd171715b9-run-httpd\") pod \"ceilometer-0\" (UID: \"5365f54a-61b6-482e-9fc7-5dcd171715b9\") " pod="openstack/ceilometer-0" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.906767 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5365f54a-61b6-482e-9fc7-5dcd171715b9-log-httpd\") pod \"ceilometer-0\" (UID: \"5365f54a-61b6-482e-9fc7-5dcd171715b9\") " pod="openstack/ceilometer-0" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.909579 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5365f54a-61b6-482e-9fc7-5dcd171715b9-scripts\") pod \"ceilometer-0\" (UID: \"5365f54a-61b6-482e-9fc7-5dcd171715b9\") " pod="openstack/ceilometer-0" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.911405 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5365f54a-61b6-482e-9fc7-5dcd171715b9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5365f54a-61b6-482e-9fc7-5dcd171715b9\") " pod="openstack/ceilometer-0" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.911692 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5365f54a-61b6-482e-9fc7-5dcd171715b9-config-data\") pod \"ceilometer-0\" (UID: \"5365f54a-61b6-482e-9fc7-5dcd171715b9\") " pod="openstack/ceilometer-0" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.913064 5024 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5365f54a-61b6-482e-9fc7-5dcd171715b9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5365f54a-61b6-482e-9fc7-5dcd171715b9\") " pod="openstack/ceilometer-0" Oct 07 12:47:08 crc kubenswrapper[5024]: I1007 12:47:08.936203 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqnpq\" (UniqueName: \"kubernetes.io/projected/5365f54a-61b6-482e-9fc7-5dcd171715b9-kube-api-access-pqnpq\") pod \"ceilometer-0\" (UID: \"5365f54a-61b6-482e-9fc7-5dcd171715b9\") " pod="openstack/ceilometer-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.102843 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.132585 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-s98w8"] Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.134035 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.181909 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-s98w8"] Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.300405 5024 generic.go:334] "Generic (PLEG): container finished" podID="a971f3af-a1cf-4423-b776-bb662957878c" containerID="e929a9535d5df0fccc6684896091c6f844b7c2817d948f988a8fffdbebe55040" exitCode=0 Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.300473 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sh44r" event={"ID":"a971f3af-a1cf-4423-b776-bb662957878c","Type":"ContainerDied","Data":"e929a9535d5df0fccc6684896091c6f844b7c2817d948f988a8fffdbebe55040"} Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.300501 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sh44r" event={"ID":"a971f3af-a1cf-4423-b776-bb662957878c","Type":"ContainerStarted","Data":"87a64ada6814fddb34be6dc0b455176a76a41a77e8797a73b98b79c694347d57"} Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.305783 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.307559 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.310516 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5pkz6" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.310716 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.310884 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.311031 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.314533 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.319476 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckc7d\" (UniqueName: \"kubernetes.io/projected/0ebb6498-3228-4baa-984b-d76104094326-kube-api-access-ckc7d\") pod \"dnsmasq-dns-6d97fcdd8f-s98w8\" (UID: \"0ebb6498-3228-4baa-984b-d76104094326\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.319591 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ebb6498-3228-4baa-984b-d76104094326-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-s98w8\" (UID: \"0ebb6498-3228-4baa-984b-d76104094326\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.319620 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ebb6498-3228-4baa-984b-d76104094326-config\") pod 
\"dnsmasq-dns-6d97fcdd8f-s98w8\" (UID: \"0ebb6498-3228-4baa-984b-d76104094326\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.319693 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ebb6498-3228-4baa-984b-d76104094326-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-s98w8\" (UID: \"0ebb6498-3228-4baa-984b-d76104094326\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.319722 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ebb6498-3228-4baa-984b-d76104094326-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-s98w8\" (UID: \"0ebb6498-3228-4baa-984b-d76104094326\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.338299 5024 generic.go:334] "Generic (PLEG): container finished" podID="346dd65c-ac64-4c3b-b9d0-6f4d16e19f51" containerID="ba88178fe388c78a0ee60285f177f2731bf95c47c1ae10bfcb3a58536b32d8a6" exitCode=0 Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.338362 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-z29vv" event={"ID":"346dd65c-ac64-4c3b-b9d0-6f4d16e19f51","Type":"ContainerDied","Data":"ba88178fe388c78a0ee60285f177f2731bf95c47c1ae10bfcb3a58536b32d8a6"} Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.338390 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-z29vv" event={"ID":"346dd65c-ac64-4c3b-b9d0-6f4d16e19f51","Type":"ContainerStarted","Data":"e963824080a557f276ee3373f896be47e258feaa18ad71f1cd816d7b8f962070"} Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.348322 5024 generic.go:334] "Generic (PLEG): container finished" podID="65244b6b-8bbf-47a0-b0e8-568a6fdba17d" 
containerID="f4ba56bdde2c651f7b22446722196f54a3d7126f0f22a8b02c9fa9c8c254e56b" exitCode=0 Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.348613 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kqnsq" event={"ID":"65244b6b-8bbf-47a0-b0e8-568a6fdba17d","Type":"ContainerDied","Data":"f4ba56bdde2c651f7b22446722196f54a3d7126f0f22a8b02c9fa9c8c254e56b"} Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.348649 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kqnsq" event={"ID":"65244b6b-8bbf-47a0-b0e8-568a6fdba17d","Type":"ContainerStarted","Data":"8b73d4bfcffd078730bc13bda7a448ec1c2f8d93ba9986ae0bb1255beebeb577"} Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.370200 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.372088 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.381551 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.394064 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.421341 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ebb6498-3228-4baa-984b-d76104094326-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-s98w8\" (UID: \"0ebb6498-3228-4baa-984b-d76104094326\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.421383 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b32ce35-3301-4125-9a0f-b6fe6993dddb-config-data\") 
pod \"cinder-scheduler-0\" (UID: \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.421406 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ebb6498-3228-4baa-984b-d76104094326-config\") pod \"dnsmasq-dns-6d97fcdd8f-s98w8\" (UID: \"0ebb6498-3228-4baa-984b-d76104094326\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.421441 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp6dq\" (UniqueName: \"kubernetes.io/projected/5b32ce35-3301-4125-9a0f-b6fe6993dddb-kube-api-access-hp6dq\") pod \"cinder-scheduler-0\" (UID: \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.421626 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b32ce35-3301-4125-9a0f-b6fe6993dddb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.421707 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ebb6498-3228-4baa-984b-d76104094326-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-s98w8\" (UID: \"0ebb6498-3228-4baa-984b-d76104094326\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.421747 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ebb6498-3228-4baa-984b-d76104094326-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-s98w8\" (UID: \"0ebb6498-3228-4baa-984b-d76104094326\") " 
pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.421958 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckc7d\" (UniqueName: \"kubernetes.io/projected/0ebb6498-3228-4baa-984b-d76104094326-kube-api-access-ckc7d\") pod \"dnsmasq-dns-6d97fcdd8f-s98w8\" (UID: \"0ebb6498-3228-4baa-984b-d76104094326\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.422028 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b32ce35-3301-4125-9a0f-b6fe6993dddb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.422068 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b32ce35-3301-4125-9a0f-b6fe6993dddb-scripts\") pod \"cinder-scheduler-0\" (UID: \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.422090 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5b32ce35-3301-4125-9a0f-b6fe6993dddb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.422510 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ebb6498-3228-4baa-984b-d76104094326-config\") pod \"dnsmasq-dns-6d97fcdd8f-s98w8\" (UID: \"0ebb6498-3228-4baa-984b-d76104094326\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" Oct 07 12:47:09 crc 
kubenswrapper[5024]: I1007 12:47:09.422528 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ebb6498-3228-4baa-984b-d76104094326-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-s98w8\" (UID: \"0ebb6498-3228-4baa-984b-d76104094326\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.422727 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ebb6498-3228-4baa-984b-d76104094326-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-s98w8\" (UID: \"0ebb6498-3228-4baa-984b-d76104094326\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.422918 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ebb6498-3228-4baa-984b-d76104094326-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-s98w8\" (UID: \"0ebb6498-3228-4baa-984b-d76104094326\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.444211 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckc7d\" (UniqueName: \"kubernetes.io/projected/0ebb6498-3228-4baa-984b-d76104094326-kube-api-access-ckc7d\") pod \"dnsmasq-dns-6d97fcdd8f-s98w8\" (UID: \"0ebb6498-3228-4baa-984b-d76104094326\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.521484 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.523418 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-config-data\") pod \"cinder-api-0\" (UID: \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\") " pod="openstack/cinder-api-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.523487 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\") " pod="openstack/cinder-api-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.523511 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-scripts\") pod \"cinder-api-0\" (UID: \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\") " pod="openstack/cinder-api-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.523558 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-config-data-custom\") pod \"cinder-api-0\" (UID: \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\") " pod="openstack/cinder-api-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.523583 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b32ce35-3301-4125-9a0f-b6fe6993dddb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.523610 5024 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\") " pod="openstack/cinder-api-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.523639 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b32ce35-3301-4125-9a0f-b6fe6993dddb-scripts\") pod \"cinder-scheduler-0\" (UID: \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.523661 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5b32ce35-3301-4125-9a0f-b6fe6993dddb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.523767 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b32ce35-3301-4125-9a0f-b6fe6993dddb-config-data\") pod \"cinder-scheduler-0\" (UID: \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.523799 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqsvw\" (UniqueName: \"kubernetes.io/projected/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-kube-api-access-xqsvw\") pod \"cinder-api-0\" (UID: \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\") " pod="openstack/cinder-api-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.523810 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/5b32ce35-3301-4125-9a0f-b6fe6993dddb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.523823 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-logs\") pod \"cinder-api-0\" (UID: \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\") " pod="openstack/cinder-api-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.523886 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp6dq\" (UniqueName: \"kubernetes.io/projected/5b32ce35-3301-4125-9a0f-b6fe6993dddb-kube-api-access-hp6dq\") pod \"cinder-scheduler-0\" (UID: \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.523984 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b32ce35-3301-4125-9a0f-b6fe6993dddb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.527320 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b32ce35-3301-4125-9a0f-b6fe6993dddb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.527767 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b32ce35-3301-4125-9a0f-b6fe6993dddb-scripts\") pod \"cinder-scheduler-0\" (UID: \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\") " 
pod="openstack/cinder-scheduler-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.528013 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b32ce35-3301-4125-9a0f-b6fe6993dddb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.528356 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b32ce35-3301-4125-9a0f-b6fe6993dddb-config-data\") pod \"cinder-scheduler-0\" (UID: \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.543284 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp6dq\" (UniqueName: \"kubernetes.io/projected/5b32ce35-3301-4125-9a0f-b6fe6993dddb-kube-api-access-hp6dq\") pod \"cinder-scheduler-0\" (UID: \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.625466 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqsvw\" (UniqueName: \"kubernetes.io/projected/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-kube-api-access-xqsvw\") pod \"cinder-api-0\" (UID: \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\") " pod="openstack/cinder-api-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.625517 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-logs\") pod \"cinder-api-0\" (UID: \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\") " pod="openstack/cinder-api-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.625621 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-config-data\") pod \"cinder-api-0\" (UID: \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\") " pod="openstack/cinder-api-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.625655 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\") " pod="openstack/cinder-api-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.625681 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-scripts\") pod \"cinder-api-0\" (UID: \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\") " pod="openstack/cinder-api-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.625711 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-config-data-custom\") pod \"cinder-api-0\" (UID: \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\") " pod="openstack/cinder-api-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.625738 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\") " pod="openstack/cinder-api-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.625875 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\") " pod="openstack/cinder-api-0" Oct 07 12:47:09 crc kubenswrapper[5024]: 
I1007 12:47:09.626668 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-logs\") pod \"cinder-api-0\" (UID: \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\") " pod="openstack/cinder-api-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.630619 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-config-data\") pod \"cinder-api-0\" (UID: \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\") " pod="openstack/cinder-api-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.633362 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-scripts\") pod \"cinder-api-0\" (UID: \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\") " pod="openstack/cinder-api-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.635295 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\") " pod="openstack/cinder-api-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.637162 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.642298 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-config-data-custom\") pod \"cinder-api-0\" (UID: \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\") " pod="openstack/cinder-api-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.649820 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqsvw\" (UniqueName: \"kubernetes.io/projected/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-kube-api-access-xqsvw\") pod \"cinder-api-0\" (UID: \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\") " pod="openstack/cinder-api-0" Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.662904 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:47:09 crc kubenswrapper[5024]: I1007 12:47:09.708976 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 07 12:47:10 crc kubenswrapper[5024]: I1007 12:47:10.070981 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-s98w8"] Oct 07 12:47:10 crc kubenswrapper[5024]: I1007 12:47:10.211526 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 07 12:47:10 crc kubenswrapper[5024]: W1007 12:47:10.230233 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b32ce35_3301_4125_9a0f_b6fe6993dddb.slice/crio-c329a7ba7198a73d1efc922fa2dce2d67291048e1cd84874848039e4edb90478 WatchSource:0}: Error finding container c329a7ba7198a73d1efc922fa2dce2d67291048e1cd84874848039e4edb90478: Status 404 returned error can't find the container with id c329a7ba7198a73d1efc922fa2dce2d67291048e1cd84874848039e4edb90478 Oct 07 12:47:10 crc kubenswrapper[5024]: I1007 12:47:10.294735 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 07 12:47:10 crc kubenswrapper[5024]: W1007 12:47:10.315255 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e6dfe73_cc16_4586_953c_ebe0a1c8ed8e.slice/crio-22ef469453fc52481360ed4f4dbaaeb59ac30d564770efb248644214fe66c43a WatchSource:0}: Error finding container 22ef469453fc52481360ed4f4dbaaeb59ac30d564770efb248644214fe66c43a: Status 404 returned error can't find the container with id 22ef469453fc52481360ed4f4dbaaeb59ac30d564770efb248644214fe66c43a Oct 07 12:47:10 crc kubenswrapper[5024]: I1007 12:47:10.360382 5024 generic.go:334] "Generic (PLEG): container finished" podID="0ebb6498-3228-4baa-984b-d76104094326" containerID="fd9c7ed4f5a0bb958ca84e1d28678db7b4f7f9c0bbd1d385d7a561980c2b1158" exitCode=0 Oct 07 12:47:10 crc kubenswrapper[5024]: I1007 12:47:10.360469 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" event={"ID":"0ebb6498-3228-4baa-984b-d76104094326","Type":"ContainerDied","Data":"fd9c7ed4f5a0bb958ca84e1d28678db7b4f7f9c0bbd1d385d7a561980c2b1158"} Oct 07 12:47:10 crc kubenswrapper[5024]: I1007 12:47:10.360565 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" event={"ID":"0ebb6498-3228-4baa-984b-d76104094326","Type":"ContainerStarted","Data":"cb02a4ec54ed84447e1d2c059e0594c67766da328db2427c6be94ce1206257b1"} Oct 07 12:47:10 crc kubenswrapper[5024]: I1007 12:47:10.364353 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5b32ce35-3301-4125-9a0f-b6fe6993dddb","Type":"ContainerStarted","Data":"c329a7ba7198a73d1efc922fa2dce2d67291048e1cd84874848039e4edb90478"} Oct 07 12:47:10 crc kubenswrapper[5024]: I1007 12:47:10.366259 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e","Type":"ContainerStarted","Data":"22ef469453fc52481360ed4f4dbaaeb59ac30d564770efb248644214fe66c43a"} Oct 07 12:47:10 crc kubenswrapper[5024]: I1007 12:47:10.368264 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5365f54a-61b6-482e-9fc7-5dcd171715b9","Type":"ContainerStarted","Data":"8c8458af6e69160d15ba07e19e9fad587c536841572cccbef1a1c053f72ab0b2"} Oct 07 12:47:10 crc kubenswrapper[5024]: I1007 12:47:10.886336 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-sh44r" Oct 07 12:47:11 crc kubenswrapper[5024]: I1007 12:47:11.059104 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5smnd\" (UniqueName: \"kubernetes.io/projected/a971f3af-a1cf-4423-b776-bb662957878c-kube-api-access-5smnd\") pod \"a971f3af-a1cf-4423-b776-bb662957878c\" (UID: \"a971f3af-a1cf-4423-b776-bb662957878c\") " Oct 07 12:47:11 crc kubenswrapper[5024]: I1007 12:47:11.066347 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a971f3af-a1cf-4423-b776-bb662957878c-kube-api-access-5smnd" (OuterVolumeSpecName: "kube-api-access-5smnd") pod "a971f3af-a1cf-4423-b776-bb662957878c" (UID: "a971f3af-a1cf-4423-b776-bb662957878c"). InnerVolumeSpecName "kube-api-access-5smnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:47:11 crc kubenswrapper[5024]: I1007 12:47:11.108520 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kqnsq" Oct 07 12:47:11 crc kubenswrapper[5024]: I1007 12:47:11.118007 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-z29vv" Oct 07 12:47:11 crc kubenswrapper[5024]: I1007 12:47:11.160887 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5smnd\" (UniqueName: \"kubernetes.io/projected/a971f3af-a1cf-4423-b776-bb662957878c-kube-api-access-5smnd\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:11 crc kubenswrapper[5024]: I1007 12:47:11.262347 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvdlg\" (UniqueName: \"kubernetes.io/projected/346dd65c-ac64-4c3b-b9d0-6f4d16e19f51-kube-api-access-zvdlg\") pod \"346dd65c-ac64-4c3b-b9d0-6f4d16e19f51\" (UID: \"346dd65c-ac64-4c3b-b9d0-6f4d16e19f51\") " Oct 07 12:47:11 crc kubenswrapper[5024]: I1007 12:47:11.262518 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rsmj\" (UniqueName: \"kubernetes.io/projected/65244b6b-8bbf-47a0-b0e8-568a6fdba17d-kube-api-access-7rsmj\") pod \"65244b6b-8bbf-47a0-b0e8-568a6fdba17d\" (UID: \"65244b6b-8bbf-47a0-b0e8-568a6fdba17d\") " Oct 07 12:47:11 crc kubenswrapper[5024]: I1007 12:47:11.275375 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/346dd65c-ac64-4c3b-b9d0-6f4d16e19f51-kube-api-access-zvdlg" (OuterVolumeSpecName: "kube-api-access-zvdlg") pod "346dd65c-ac64-4c3b-b9d0-6f4d16e19f51" (UID: "346dd65c-ac64-4c3b-b9d0-6f4d16e19f51"). InnerVolumeSpecName "kube-api-access-zvdlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:47:11 crc kubenswrapper[5024]: I1007 12:47:11.275467 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65244b6b-8bbf-47a0-b0e8-568a6fdba17d-kube-api-access-7rsmj" (OuterVolumeSpecName: "kube-api-access-7rsmj") pod "65244b6b-8bbf-47a0-b0e8-568a6fdba17d" (UID: "65244b6b-8bbf-47a0-b0e8-568a6fdba17d"). InnerVolumeSpecName "kube-api-access-7rsmj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:47:11 crc kubenswrapper[5024]: I1007 12:47:11.338473 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-56d49fff64-vcdq6" Oct 07 12:47:11 crc kubenswrapper[5024]: I1007 12:47:11.364826 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvdlg\" (UniqueName: \"kubernetes.io/projected/346dd65c-ac64-4c3b-b9d0-6f4d16e19f51-kube-api-access-zvdlg\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:11 crc kubenswrapper[5024]: I1007 12:47:11.364874 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rsmj\" (UniqueName: \"kubernetes.io/projected/65244b6b-8bbf-47a0-b0e8-568a6fdba17d-kube-api-access-7rsmj\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:11 crc kubenswrapper[5024]: I1007 12:47:11.396719 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sh44r" Oct 07 12:47:11 crc kubenswrapper[5024]: I1007 12:47:11.397512 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sh44r" event={"ID":"a971f3af-a1cf-4423-b776-bb662957878c","Type":"ContainerDied","Data":"87a64ada6814fddb34be6dc0b455176a76a41a77e8797a73b98b79c694347d57"} Oct 07 12:47:11 crc kubenswrapper[5024]: I1007 12:47:11.397565 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87a64ada6814fddb34be6dc0b455176a76a41a77e8797a73b98b79c694347d57" Oct 07 12:47:11 crc kubenswrapper[5024]: I1007 12:47:11.412278 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5365f54a-61b6-482e-9fc7-5dcd171715b9","Type":"ContainerStarted","Data":"daeebef8c080b478f1181c61908f483e0c221caaf04d7dc8f94b30a452207ffa"} Oct 07 12:47:11 crc kubenswrapper[5024]: I1007 12:47:11.418960 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-z29vv" 
event={"ID":"346dd65c-ac64-4c3b-b9d0-6f4d16e19f51","Type":"ContainerDied","Data":"e963824080a557f276ee3373f896be47e258feaa18ad71f1cd816d7b8f962070"} Oct 07 12:47:11 crc kubenswrapper[5024]: I1007 12:47:11.419004 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e963824080a557f276ee3373f896be47e258feaa18ad71f1cd816d7b8f962070" Oct 07 12:47:11 crc kubenswrapper[5024]: I1007 12:47:11.419062 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-z29vv" Oct 07 12:47:11 crc kubenswrapper[5024]: I1007 12:47:11.432953 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kqnsq" event={"ID":"65244b6b-8bbf-47a0-b0e8-568a6fdba17d","Type":"ContainerDied","Data":"8b73d4bfcffd078730bc13bda7a448ec1c2f8d93ba9986ae0bb1255beebeb577"} Oct 07 12:47:11 crc kubenswrapper[5024]: I1007 12:47:11.432999 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b73d4bfcffd078730bc13bda7a448ec1c2f8d93ba9986ae0bb1255beebeb577" Oct 07 12:47:11 crc kubenswrapper[5024]: I1007 12:47:11.433367 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-kqnsq" Oct 07 12:47:11 crc kubenswrapper[5024]: I1007 12:47:11.442401 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" event={"ID":"0ebb6498-3228-4baa-984b-d76104094326","Type":"ContainerStarted","Data":"09a96a2de283cd00b5d157973c2eb63d7343c811a7e5236da83eb4fe120bca13"} Oct 07 12:47:11 crc kubenswrapper[5024]: I1007 12:47:11.445308 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" Oct 07 12:47:11 crc kubenswrapper[5024]: I1007 12:47:11.494229 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 07 12:47:11 crc kubenswrapper[5024]: I1007 12:47:11.494574 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" podStartSLOduration=2.4945646630000002 podStartE2EDuration="2.494564663s" podCreationTimestamp="2025-10-07 12:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:47:11.483802347 +0000 UTC m=+1169.559589185" watchObservedRunningTime="2025-10-07 12:47:11.494564663 +0000 UTC m=+1169.570351501" Oct 07 12:47:12 crc kubenswrapper[5024]: I1007 12:47:12.478681 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5b32ce35-3301-4125-9a0f-b6fe6993dddb","Type":"ContainerStarted","Data":"870351ed79221120352708b9048a40d6c7be3a6ec8964e4ef40258db241b9abc"} Oct 07 12:47:12 crc kubenswrapper[5024]: I1007 12:47:12.484402 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e","Type":"ContainerStarted","Data":"c5bac27488703fe7d6398ca4af2ce39909a71e49e3af7dc8a7a16389e27f400c"} Oct 07 12:47:12 crc kubenswrapper[5024]: I1007 12:47:12.493356 5024 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"5365f54a-61b6-482e-9fc7-5dcd171715b9","Type":"ContainerStarted","Data":"49aae2e11bf5119536fb1f4681dbc67810b378d26580fa7677293db8025e732d"} Oct 07 12:47:13 crc kubenswrapper[5024]: I1007 12:47:13.425526 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6cf6bc68f7-lxxqd" Oct 07 12:47:13 crc kubenswrapper[5024]: I1007 12:47:13.520903 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-56d49fff64-vcdq6"] Oct 07 12:47:13 crc kubenswrapper[5024]: I1007 12:47:13.526798 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-56d49fff64-vcdq6" podUID="79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3" containerName="neutron-api" containerID="cri-o://8b52dd8f3bce230e3160d5b6531bf5449ee13d8c795a0481097282af26e99997" gracePeriod=30 Oct 07 12:47:13 crc kubenswrapper[5024]: I1007 12:47:13.527015 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e","Type":"ContainerStarted","Data":"e8a2e3900eef5b75457fba9293040efddc2cae0be6bc6442c711877157ce2aa3"} Oct 07 12:47:13 crc kubenswrapper[5024]: I1007 12:47:13.527213 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e" containerName="cinder-api-log" containerID="cri-o://c5bac27488703fe7d6398ca4af2ce39909a71e49e3af7dc8a7a16389e27f400c" gracePeriod=30 Oct 07 12:47:13 crc kubenswrapper[5024]: I1007 12:47:13.527255 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-56d49fff64-vcdq6" podUID="79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3" containerName="neutron-httpd" containerID="cri-o://d5ac49ce16c99b4b2fcc087c071e08c98b088775affe0d55680769cc89849ab1" gracePeriod=30 Oct 07 12:47:13 crc kubenswrapper[5024]: I1007 12:47:13.527332 5024 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/cinder-api-0" Oct 07 12:47:13 crc kubenswrapper[5024]: I1007 12:47:13.527372 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e" containerName="cinder-api" containerID="cri-o://e8a2e3900eef5b75457fba9293040efddc2cae0be6bc6442c711877157ce2aa3" gracePeriod=30 Oct 07 12:47:13 crc kubenswrapper[5024]: I1007 12:47:13.543837 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5365f54a-61b6-482e-9fc7-5dcd171715b9","Type":"ContainerStarted","Data":"693772bc036ec89c2c0728d5f05b6fca68f30e5992e57b416b4a729538a9ad6d"} Oct 07 12:47:13 crc kubenswrapper[5024]: I1007 12:47:13.545727 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5b32ce35-3301-4125-9a0f-b6fe6993dddb","Type":"ContainerStarted","Data":"13f2e05faf495ec8d6dceb7decb2fb1508f0b4f480bf35a8be23b7ca4ee1890a"} Oct 07 12:47:13 crc kubenswrapper[5024]: I1007 12:47:13.555505 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.555473394 podStartE2EDuration="4.555473394s" podCreationTimestamp="2025-10-07 12:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:47:13.553505496 +0000 UTC m=+1171.629292334" watchObservedRunningTime="2025-10-07 12:47:13.555473394 +0000 UTC m=+1171.631260232" Oct 07 12:47:13 crc kubenswrapper[5024]: I1007 12:47:13.584341 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.706438202 podStartE2EDuration="4.58432402s" podCreationTimestamp="2025-10-07 12:47:09 +0000 UTC" firstStartedPulling="2025-10-07 12:47:10.23276629 +0000 UTC m=+1168.308553128" lastFinishedPulling="2025-10-07 12:47:11.110652108 +0000 UTC 
m=+1169.186438946" observedRunningTime="2025-10-07 12:47:13.575267185 +0000 UTC m=+1171.651054023" watchObservedRunningTime="2025-10-07 12:47:13.58432402 +0000 UTC m=+1171.660110858" Oct 07 12:47:13 crc kubenswrapper[5024]: I1007 12:47:13.720637 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 12:47:13 crc kubenswrapper[5024]: I1007 12:47:13.720708 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.148193 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.246991 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-config-data\") pod \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\" (UID: \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\") " Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.247382 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-scripts\") pod \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\" (UID: \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\") " Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.247725 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-config-data-custom\") pod \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\" (UID: \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\") " Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.247755 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqsvw\" (UniqueName: \"kubernetes.io/projected/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-kube-api-access-xqsvw\") pod \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\" (UID: \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\") " Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.247803 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-logs\") pod \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\" (UID: \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\") " Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.247841 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-etc-machine-id\") pod \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\" (UID: \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\") " Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.247891 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-combined-ca-bundle\") pod \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\" (UID: \"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e\") " Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.248154 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-logs" (OuterVolumeSpecName: "logs") pod "7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e" (UID: "7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.248560 5024 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-logs\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.248602 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e" (UID: "7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.256683 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e" (UID: "7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.256715 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-kube-api-access-xqsvw" (OuterVolumeSpecName: "kube-api-access-xqsvw") pod "7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e" (UID: "7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e"). InnerVolumeSpecName "kube-api-access-xqsvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.256821 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-scripts" (OuterVolumeSpecName: "scripts") pod "7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e" (UID: "7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.281027 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e" (UID: "7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.294086 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-config-data" (OuterVolumeSpecName: "config-data") pod "7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e" (UID: "7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.350506 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.350734 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-scripts\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.350808 5024 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.350881 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqsvw\" (UniqueName: \"kubernetes.io/projected/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-kube-api-access-xqsvw\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.350977 5024 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.351046 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.557417 5024 generic.go:334] "Generic (PLEG): container finished" podID="7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e" containerID="e8a2e3900eef5b75457fba9293040efddc2cae0be6bc6442c711877157ce2aa3" exitCode=0 Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.557448 5024 generic.go:334] 
"Generic (PLEG): container finished" podID="7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e" containerID="c5bac27488703fe7d6398ca4af2ce39909a71e49e3af7dc8a7a16389e27f400c" exitCode=143 Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.557493 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.557502 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e","Type":"ContainerDied","Data":"e8a2e3900eef5b75457fba9293040efddc2cae0be6bc6442c711877157ce2aa3"} Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.557528 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e","Type":"ContainerDied","Data":"c5bac27488703fe7d6398ca4af2ce39909a71e49e3af7dc8a7a16389e27f400c"} Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.557540 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e","Type":"ContainerDied","Data":"22ef469453fc52481360ed4f4dbaaeb59ac30d564770efb248644214fe66c43a"} Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.557554 5024 scope.go:117] "RemoveContainer" containerID="e8a2e3900eef5b75457fba9293040efddc2cae0be6bc6442c711877157ce2aa3" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.560415 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5365f54a-61b6-482e-9fc7-5dcd171715b9","Type":"ContainerStarted","Data":"8b5278e61e9b259279c8572d2c98076cdb5a74b90081e441968fba63ced17b7c"} Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.561218 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.563355 5024 generic.go:334] "Generic (PLEG): container finished" 
podID="79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3" containerID="d5ac49ce16c99b4b2fcc087c071e08c98b088775affe0d55680769cc89849ab1" exitCode=0 Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.563978 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56d49fff64-vcdq6" event={"ID":"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3","Type":"ContainerDied","Data":"d5ac49ce16c99b4b2fcc087c071e08c98b088775affe0d55680769cc89849ab1"} Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.584782 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.442355278 podStartE2EDuration="6.584766854s" podCreationTimestamp="2025-10-07 12:47:08 +0000 UTC" firstStartedPulling="2025-10-07 12:47:09.691198189 +0000 UTC m=+1167.766985027" lastFinishedPulling="2025-10-07 12:47:13.833609765 +0000 UTC m=+1171.909396603" observedRunningTime="2025-10-07 12:47:14.584069673 +0000 UTC m=+1172.659856511" watchObservedRunningTime="2025-10-07 12:47:14.584766854 +0000 UTC m=+1172.660553692" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.600908 5024 scope.go:117] "RemoveContainer" containerID="c5bac27488703fe7d6398ca4af2ce39909a71e49e3af7dc8a7a16389e27f400c" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.606595 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.610895 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.621948 5024 scope.go:117] "RemoveContainer" containerID="e8a2e3900eef5b75457fba9293040efddc2cae0be6bc6442c711877157ce2aa3" Oct 07 12:47:14 crc kubenswrapper[5024]: E1007 12:47:14.622637 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8a2e3900eef5b75457fba9293040efddc2cae0be6bc6442c711877157ce2aa3\": container with ID starting with 
e8a2e3900eef5b75457fba9293040efddc2cae0be6bc6442c711877157ce2aa3 not found: ID does not exist" containerID="e8a2e3900eef5b75457fba9293040efddc2cae0be6bc6442c711877157ce2aa3" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.622705 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8a2e3900eef5b75457fba9293040efddc2cae0be6bc6442c711877157ce2aa3"} err="failed to get container status \"e8a2e3900eef5b75457fba9293040efddc2cae0be6bc6442c711877157ce2aa3\": rpc error: code = NotFound desc = could not find container \"e8a2e3900eef5b75457fba9293040efddc2cae0be6bc6442c711877157ce2aa3\": container with ID starting with e8a2e3900eef5b75457fba9293040efddc2cae0be6bc6442c711877157ce2aa3 not found: ID does not exist" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.622742 5024 scope.go:117] "RemoveContainer" containerID="c5bac27488703fe7d6398ca4af2ce39909a71e49e3af7dc8a7a16389e27f400c" Oct 07 12:47:14 crc kubenswrapper[5024]: E1007 12:47:14.623300 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5bac27488703fe7d6398ca4af2ce39909a71e49e3af7dc8a7a16389e27f400c\": container with ID starting with c5bac27488703fe7d6398ca4af2ce39909a71e49e3af7dc8a7a16389e27f400c not found: ID does not exist" containerID="c5bac27488703fe7d6398ca4af2ce39909a71e49e3af7dc8a7a16389e27f400c" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.623340 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5bac27488703fe7d6398ca4af2ce39909a71e49e3af7dc8a7a16389e27f400c"} err="failed to get container status \"c5bac27488703fe7d6398ca4af2ce39909a71e49e3af7dc8a7a16389e27f400c\": rpc error: code = NotFound desc = could not find container \"c5bac27488703fe7d6398ca4af2ce39909a71e49e3af7dc8a7a16389e27f400c\": container with ID starting with c5bac27488703fe7d6398ca4af2ce39909a71e49e3af7dc8a7a16389e27f400c not found: ID does not 
exist" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.623369 5024 scope.go:117] "RemoveContainer" containerID="e8a2e3900eef5b75457fba9293040efddc2cae0be6bc6442c711877157ce2aa3" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.625117 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8a2e3900eef5b75457fba9293040efddc2cae0be6bc6442c711877157ce2aa3"} err="failed to get container status \"e8a2e3900eef5b75457fba9293040efddc2cae0be6bc6442c711877157ce2aa3\": rpc error: code = NotFound desc = could not find container \"e8a2e3900eef5b75457fba9293040efddc2cae0be6bc6442c711877157ce2aa3\": container with ID starting with e8a2e3900eef5b75457fba9293040efddc2cae0be6bc6442c711877157ce2aa3 not found: ID does not exist" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.625208 5024 scope.go:117] "RemoveContainer" containerID="c5bac27488703fe7d6398ca4af2ce39909a71e49e3af7dc8a7a16389e27f400c" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.625567 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5bac27488703fe7d6398ca4af2ce39909a71e49e3af7dc8a7a16389e27f400c"} err="failed to get container status \"c5bac27488703fe7d6398ca4af2ce39909a71e49e3af7dc8a7a16389e27f400c\": rpc error: code = NotFound desc = could not find container \"c5bac27488703fe7d6398ca4af2ce39909a71e49e3af7dc8a7a16389e27f400c\": container with ID starting with c5bac27488703fe7d6398ca4af2ce39909a71e49e3af7dc8a7a16389e27f400c not found: ID does not exist" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.638447 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.640757 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 07 12:47:14 crc kubenswrapper[5024]: E1007 12:47:14.641268 5024 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="65244b6b-8bbf-47a0-b0e8-568a6fdba17d" containerName="mariadb-database-create" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.641283 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="65244b6b-8bbf-47a0-b0e8-568a6fdba17d" containerName="mariadb-database-create" Oct 07 12:47:14 crc kubenswrapper[5024]: E1007 12:47:14.641298 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a971f3af-a1cf-4423-b776-bb662957878c" containerName="mariadb-database-create" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.641306 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a971f3af-a1cf-4423-b776-bb662957878c" containerName="mariadb-database-create" Oct 07 12:47:14 crc kubenswrapper[5024]: E1007 12:47:14.641325 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e" containerName="cinder-api" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.641334 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e" containerName="cinder-api" Oct 07 12:47:14 crc kubenswrapper[5024]: E1007 12:47:14.641356 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e" containerName="cinder-api-log" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.641363 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e" containerName="cinder-api-log" Oct 07 12:47:14 crc kubenswrapper[5024]: E1007 12:47:14.641379 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="346dd65c-ac64-4c3b-b9d0-6f4d16e19f51" containerName="mariadb-database-create" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.641386 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="346dd65c-ac64-4c3b-b9d0-6f4d16e19f51" containerName="mariadb-database-create" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.641646 5024 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e" containerName="cinder-api-log" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.641673 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="346dd65c-ac64-4c3b-b9d0-6f4d16e19f51" containerName="mariadb-database-create" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.641686 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e" containerName="cinder-api" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.641700 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="a971f3af-a1cf-4423-b776-bb662957878c" containerName="mariadb-database-create" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.641713 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="65244b6b-8bbf-47a0-b0e8-568a6fdba17d" containerName="mariadb-database-create" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.642958 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.645022 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.645197 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.645366 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.647191 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.757467 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55dc71d9-1e63-48ce-a93b-c540ec901312-config-data-custom\") pod \"cinder-api-0\" (UID: \"55dc71d9-1e63-48ce-a93b-c540ec901312\") " pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.757541 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55dc71d9-1e63-48ce-a93b-c540ec901312-scripts\") pod \"cinder-api-0\" (UID: \"55dc71d9-1e63-48ce-a93b-c540ec901312\") " pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.757565 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/55dc71d9-1e63-48ce-a93b-c540ec901312-etc-machine-id\") pod \"cinder-api-0\" (UID: \"55dc71d9-1e63-48ce-a93b-c540ec901312\") " pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.757600 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-7m6qc\" (UniqueName: \"kubernetes.io/projected/55dc71d9-1e63-48ce-a93b-c540ec901312-kube-api-access-7m6qc\") pod \"cinder-api-0\" (UID: \"55dc71d9-1e63-48ce-a93b-c540ec901312\") " pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.757617 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55dc71d9-1e63-48ce-a93b-c540ec901312-logs\") pod \"cinder-api-0\" (UID: \"55dc71d9-1e63-48ce-a93b-c540ec901312\") " pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.757663 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55dc71d9-1e63-48ce-a93b-c540ec901312-public-tls-certs\") pod \"cinder-api-0\" (UID: \"55dc71d9-1e63-48ce-a93b-c540ec901312\") " pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.757686 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55dc71d9-1e63-48ce-a93b-c540ec901312-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"55dc71d9-1e63-48ce-a93b-c540ec901312\") " pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.757700 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55dc71d9-1e63-48ce-a93b-c540ec901312-config-data\") pod \"cinder-api-0\" (UID: \"55dc71d9-1e63-48ce-a93b-c540ec901312\") " pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.757751 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55dc71d9-1e63-48ce-a93b-c540ec901312-internal-tls-certs\") pod 
\"cinder-api-0\" (UID: \"55dc71d9-1e63-48ce-a93b-c540ec901312\") " pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.779805 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e" path="/var/lib/kubelet/pods/7e6dfe73-cc16-4586-953c-ebe0a1c8ed8e/volumes" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.859858 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55dc71d9-1e63-48ce-a93b-c540ec901312-public-tls-certs\") pod \"cinder-api-0\" (UID: \"55dc71d9-1e63-48ce-a93b-c540ec901312\") " pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.860287 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55dc71d9-1e63-48ce-a93b-c540ec901312-config-data\") pod \"cinder-api-0\" (UID: \"55dc71d9-1e63-48ce-a93b-c540ec901312\") " pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.860314 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55dc71d9-1e63-48ce-a93b-c540ec901312-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"55dc71d9-1e63-48ce-a93b-c540ec901312\") " pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.860391 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55dc71d9-1e63-48ce-a93b-c540ec901312-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"55dc71d9-1e63-48ce-a93b-c540ec901312\") " pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.860424 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/55dc71d9-1e63-48ce-a93b-c540ec901312-config-data-custom\") pod \"cinder-api-0\" (UID: \"55dc71d9-1e63-48ce-a93b-c540ec901312\") " pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.860474 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55dc71d9-1e63-48ce-a93b-c540ec901312-scripts\") pod \"cinder-api-0\" (UID: \"55dc71d9-1e63-48ce-a93b-c540ec901312\") " pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.860504 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/55dc71d9-1e63-48ce-a93b-c540ec901312-etc-machine-id\") pod \"cinder-api-0\" (UID: \"55dc71d9-1e63-48ce-a93b-c540ec901312\") " pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.860547 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m6qc\" (UniqueName: \"kubernetes.io/projected/55dc71d9-1e63-48ce-a93b-c540ec901312-kube-api-access-7m6qc\") pod \"cinder-api-0\" (UID: \"55dc71d9-1e63-48ce-a93b-c540ec901312\") " pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.860571 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55dc71d9-1e63-48ce-a93b-c540ec901312-logs\") pod \"cinder-api-0\" (UID: \"55dc71d9-1e63-48ce-a93b-c540ec901312\") " pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.860802 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/55dc71d9-1e63-48ce-a93b-c540ec901312-etc-machine-id\") pod \"cinder-api-0\" (UID: \"55dc71d9-1e63-48ce-a93b-c540ec901312\") " pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.860972 
5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55dc71d9-1e63-48ce-a93b-c540ec901312-logs\") pod \"cinder-api-0\" (UID: \"55dc71d9-1e63-48ce-a93b-c540ec901312\") " pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.867325 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55dc71d9-1e63-48ce-a93b-c540ec901312-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"55dc71d9-1e63-48ce-a93b-c540ec901312\") " pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.867350 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55dc71d9-1e63-48ce-a93b-c540ec901312-config-data-custom\") pod \"cinder-api-0\" (UID: \"55dc71d9-1e63-48ce-a93b-c540ec901312\") " pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.868222 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55dc71d9-1e63-48ce-a93b-c540ec901312-config-data\") pod \"cinder-api-0\" (UID: \"55dc71d9-1e63-48ce-a93b-c540ec901312\") " pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.868817 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55dc71d9-1e63-48ce-a93b-c540ec901312-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"55dc71d9-1e63-48ce-a93b-c540ec901312\") " pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.870404 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55dc71d9-1e63-48ce-a93b-c540ec901312-public-tls-certs\") pod \"cinder-api-0\" (UID: \"55dc71d9-1e63-48ce-a93b-c540ec901312\") " 
pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.877430 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55dc71d9-1e63-48ce-a93b-c540ec901312-scripts\") pod \"cinder-api-0\" (UID: \"55dc71d9-1e63-48ce-a93b-c540ec901312\") " pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.880201 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m6qc\" (UniqueName: \"kubernetes.io/projected/55dc71d9-1e63-48ce-a93b-c540ec901312-kube-api-access-7m6qc\") pod \"cinder-api-0\" (UID: \"55dc71d9-1e63-48ce-a93b-c540ec901312\") " pod="openstack/cinder-api-0" Oct 07 12:47:14 crc kubenswrapper[5024]: I1007 12:47:14.959185 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 07 12:47:15 crc kubenswrapper[5024]: I1007 12:47:15.402300 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 07 12:47:15 crc kubenswrapper[5024]: W1007 12:47:15.416537 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55dc71d9_1e63_48ce_a93b_c540ec901312.slice/crio-fcc9e93da1900bac5b844824d011acc3703ca9f5803d97f8d6e06a9c6bb952d4 WatchSource:0}: Error finding container fcc9e93da1900bac5b844824d011acc3703ca9f5803d97f8d6e06a9c6bb952d4: Status 404 returned error can't find the container with id fcc9e93da1900bac5b844824d011acc3703ca9f5803d97f8d6e06a9c6bb952d4 Oct 07 12:47:15 crc kubenswrapper[5024]: I1007 12:47:15.577834 5024 generic.go:334] "Generic (PLEG): container finished" podID="79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3" containerID="8b52dd8f3bce230e3160d5b6531bf5449ee13d8c795a0481097282af26e99997" exitCode=0 Oct 07 12:47:15 crc kubenswrapper[5024]: I1007 12:47:15.577895 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56d49fff64-vcdq6" 
event={"ID":"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3","Type":"ContainerDied","Data":"8b52dd8f3bce230e3160d5b6531bf5449ee13d8c795a0481097282af26e99997"} Oct 07 12:47:15 crc kubenswrapper[5024]: I1007 12:47:15.586367 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"55dc71d9-1e63-48ce-a93b-c540ec901312","Type":"ContainerStarted","Data":"fcc9e93da1900bac5b844824d011acc3703ca9f5803d97f8d6e06a9c6bb952d4"} Oct 07 12:47:15 crc kubenswrapper[5024]: I1007 12:47:15.735304 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56d49fff64-vcdq6" Oct 07 12:47:15 crc kubenswrapper[5024]: I1007 12:47:15.890838 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-config\") pod \"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3\" (UID: \"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3\") " Oct 07 12:47:15 crc kubenswrapper[5024]: I1007 12:47:15.891215 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-ovndb-tls-certs\") pod \"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3\" (UID: \"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3\") " Oct 07 12:47:15 crc kubenswrapper[5024]: I1007 12:47:15.891272 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8j6n\" (UniqueName: \"kubernetes.io/projected/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-kube-api-access-n8j6n\") pod \"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3\" (UID: \"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3\") " Oct 07 12:47:15 crc kubenswrapper[5024]: I1007 12:47:15.891337 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-httpd-config\") pod \"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3\" (UID: 
\"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3\") " Oct 07 12:47:15 crc kubenswrapper[5024]: I1007 12:47:15.891417 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-combined-ca-bundle\") pod \"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3\" (UID: \"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3\") " Oct 07 12:47:15 crc kubenswrapper[5024]: I1007 12:47:15.897938 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-kube-api-access-n8j6n" (OuterVolumeSpecName: "kube-api-access-n8j6n") pod "79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3" (UID: "79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3"). InnerVolumeSpecName "kube-api-access-n8j6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:47:15 crc kubenswrapper[5024]: I1007 12:47:15.904326 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3" (UID: "79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:15 crc kubenswrapper[5024]: I1007 12:47:15.953824 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-config" (OuterVolumeSpecName: "config") pod "79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3" (UID: "79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:15 crc kubenswrapper[5024]: I1007 12:47:15.961816 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3" (UID: "79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:15 crc kubenswrapper[5024]: I1007 12:47:15.984420 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3" (UID: "79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:15 crc kubenswrapper[5024]: I1007 12:47:15.995294 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:15 crc kubenswrapper[5024]: I1007 12:47:15.995357 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-config\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:15 crc kubenswrapper[5024]: I1007 12:47:15.995372 5024 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:15 crc kubenswrapper[5024]: I1007 12:47:15.995411 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8j6n\" (UniqueName: \"kubernetes.io/projected/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-kube-api-access-n8j6n\") on node \"crc\" 
DevicePath \"\"" Oct 07 12:47:15 crc kubenswrapper[5024]: I1007 12:47:15.995429 5024 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3-httpd-config\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:16 crc kubenswrapper[5024]: I1007 12:47:16.633017 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56d49fff64-vcdq6" event={"ID":"79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3","Type":"ContainerDied","Data":"7a454853960f2013b6f2a07ef479f205ced26393950cc40ef10142b05db6639f"} Oct 07 12:47:16 crc kubenswrapper[5024]: I1007 12:47:16.634165 5024 scope.go:117] "RemoveContainer" containerID="d5ac49ce16c99b4b2fcc087c071e08c98b088775affe0d55680769cc89849ab1" Oct 07 12:47:16 crc kubenswrapper[5024]: I1007 12:47:16.634715 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56d49fff64-vcdq6" Oct 07 12:47:16 crc kubenswrapper[5024]: I1007 12:47:16.640045 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"55dc71d9-1e63-48ce-a93b-c540ec901312","Type":"ContainerStarted","Data":"9578f2561f20f992e4a4eb233933ef324365d76c54f02f21f134312a5b59eacf"} Oct 07 12:47:16 crc kubenswrapper[5024]: I1007 12:47:16.660336 5024 scope.go:117] "RemoveContainer" containerID="8b52dd8f3bce230e3160d5b6531bf5449ee13d8c795a0481097282af26e99997" Oct 07 12:47:16 crc kubenswrapper[5024]: I1007 12:47:16.684115 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-56d49fff64-vcdq6"] Oct 07 12:47:16 crc kubenswrapper[5024]: I1007 12:47:16.700367 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-56d49fff64-vcdq6"] Oct 07 12:47:16 crc kubenswrapper[5024]: I1007 12:47:16.768083 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3" path="/var/lib/kubelet/pods/79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3/volumes" 
Oct 07 12:47:17 crc kubenswrapper[5024]: I1007 12:47:17.654265 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"55dc71d9-1e63-48ce-a93b-c540ec901312","Type":"ContainerStarted","Data":"80d85f2888aa1e2671be128e6a9f54b161c48632c36db150d8801ccd9410bd43"} Oct 07 12:47:17 crc kubenswrapper[5024]: I1007 12:47:17.654735 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 07 12:47:17 crc kubenswrapper[5024]: I1007 12:47:17.685260 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.6852364 podStartE2EDuration="3.6852364s" podCreationTimestamp="2025-10-07 12:47:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:47:17.675381431 +0000 UTC m=+1175.751168269" watchObservedRunningTime="2025-10-07 12:47:17.6852364 +0000 UTC m=+1175.761023268" Oct 07 12:47:17 crc kubenswrapper[5024]: I1007 12:47:17.978521 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:47:17 crc kubenswrapper[5024]: I1007 12:47:17.979034 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5365f54a-61b6-482e-9fc7-5dcd171715b9" containerName="ceilometer-central-agent" containerID="cri-o://daeebef8c080b478f1181c61908f483e0c221caaf04d7dc8f94b30a452207ffa" gracePeriod=30 Oct 07 12:47:17 crc kubenswrapper[5024]: I1007 12:47:17.979511 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5365f54a-61b6-482e-9fc7-5dcd171715b9" containerName="proxy-httpd" containerID="cri-o://8b5278e61e9b259279c8572d2c98076cdb5a74b90081e441968fba63ced17b7c" gracePeriod=30 Oct 07 12:47:17 crc kubenswrapper[5024]: I1007 12:47:17.979650 5024 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="5365f54a-61b6-482e-9fc7-5dcd171715b9" containerName="sg-core" containerID="cri-o://693772bc036ec89c2c0728d5f05b6fca68f30e5992e57b416b4a729538a9ad6d" gracePeriod=30 Oct 07 12:47:17 crc kubenswrapper[5024]: I1007 12:47:17.979778 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5365f54a-61b6-482e-9fc7-5dcd171715b9" containerName="ceilometer-notification-agent" containerID="cri-o://49aae2e11bf5119536fb1f4681dbc67810b378d26580fa7677293db8025e732d" gracePeriod=30 Oct 07 12:47:18 crc kubenswrapper[5024]: I1007 12:47:18.672148 5024 generic.go:334] "Generic (PLEG): container finished" podID="5365f54a-61b6-482e-9fc7-5dcd171715b9" containerID="8b5278e61e9b259279c8572d2c98076cdb5a74b90081e441968fba63ced17b7c" exitCode=0 Oct 07 12:47:18 crc kubenswrapper[5024]: I1007 12:47:18.672636 5024 generic.go:334] "Generic (PLEG): container finished" podID="5365f54a-61b6-482e-9fc7-5dcd171715b9" containerID="693772bc036ec89c2c0728d5f05b6fca68f30e5992e57b416b4a729538a9ad6d" exitCode=2 Oct 07 12:47:18 crc kubenswrapper[5024]: I1007 12:47:18.672650 5024 generic.go:334] "Generic (PLEG): container finished" podID="5365f54a-61b6-482e-9fc7-5dcd171715b9" containerID="49aae2e11bf5119536fb1f4681dbc67810b378d26580fa7677293db8025e732d" exitCode=0 Oct 07 12:47:18 crc kubenswrapper[5024]: I1007 12:47:18.672660 5024 generic.go:334] "Generic (PLEG): container finished" podID="5365f54a-61b6-482e-9fc7-5dcd171715b9" containerID="daeebef8c080b478f1181c61908f483e0c221caaf04d7dc8f94b30a452207ffa" exitCode=0 Oct 07 12:47:18 crc kubenswrapper[5024]: I1007 12:47:18.672226 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5365f54a-61b6-482e-9fc7-5dcd171715b9","Type":"ContainerDied","Data":"8b5278e61e9b259279c8572d2c98076cdb5a74b90081e441968fba63ced17b7c"} Oct 07 12:47:18 crc kubenswrapper[5024]: I1007 12:47:18.672759 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"5365f54a-61b6-482e-9fc7-5dcd171715b9","Type":"ContainerDied","Data":"693772bc036ec89c2c0728d5f05b6fca68f30e5992e57b416b4a729538a9ad6d"} Oct 07 12:47:18 crc kubenswrapper[5024]: I1007 12:47:18.672775 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5365f54a-61b6-482e-9fc7-5dcd171715b9","Type":"ContainerDied","Data":"49aae2e11bf5119536fb1f4681dbc67810b378d26580fa7677293db8025e732d"} Oct 07 12:47:18 crc kubenswrapper[5024]: I1007 12:47:18.672786 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5365f54a-61b6-482e-9fc7-5dcd171715b9","Type":"ContainerDied","Data":"daeebef8c080b478f1181c61908f483e0c221caaf04d7dc8f94b30a452207ffa"} Oct 07 12:47:18 crc kubenswrapper[5024]: I1007 12:47:18.780756 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 07 12:47:18 crc kubenswrapper[5024]: I1007 12:47:18.946857 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5365f54a-61b6-482e-9fc7-5dcd171715b9-scripts\") pod \"5365f54a-61b6-482e-9fc7-5dcd171715b9\" (UID: \"5365f54a-61b6-482e-9fc7-5dcd171715b9\") " Oct 07 12:47:18 crc kubenswrapper[5024]: I1007 12:47:18.947110 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5365f54a-61b6-482e-9fc7-5dcd171715b9-log-httpd\") pod \"5365f54a-61b6-482e-9fc7-5dcd171715b9\" (UID: \"5365f54a-61b6-482e-9fc7-5dcd171715b9\") " Oct 07 12:47:18 crc kubenswrapper[5024]: I1007 12:47:18.947308 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5365f54a-61b6-482e-9fc7-5dcd171715b9-combined-ca-bundle\") pod \"5365f54a-61b6-482e-9fc7-5dcd171715b9\" (UID: \"5365f54a-61b6-482e-9fc7-5dcd171715b9\") " Oct 07 12:47:18 
crc kubenswrapper[5024]: I1007 12:47:18.947367 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5365f54a-61b6-482e-9fc7-5dcd171715b9-run-httpd\") pod \"5365f54a-61b6-482e-9fc7-5dcd171715b9\" (UID: \"5365f54a-61b6-482e-9fc7-5dcd171715b9\") " Oct 07 12:47:18 crc kubenswrapper[5024]: I1007 12:47:18.947473 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5365f54a-61b6-482e-9fc7-5dcd171715b9-sg-core-conf-yaml\") pod \"5365f54a-61b6-482e-9fc7-5dcd171715b9\" (UID: \"5365f54a-61b6-482e-9fc7-5dcd171715b9\") " Oct 07 12:47:18 crc kubenswrapper[5024]: I1007 12:47:18.947524 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqnpq\" (UniqueName: \"kubernetes.io/projected/5365f54a-61b6-482e-9fc7-5dcd171715b9-kube-api-access-pqnpq\") pod \"5365f54a-61b6-482e-9fc7-5dcd171715b9\" (UID: \"5365f54a-61b6-482e-9fc7-5dcd171715b9\") " Oct 07 12:47:18 crc kubenswrapper[5024]: I1007 12:47:18.947593 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5365f54a-61b6-482e-9fc7-5dcd171715b9-config-data\") pod \"5365f54a-61b6-482e-9fc7-5dcd171715b9\" (UID: \"5365f54a-61b6-482e-9fc7-5dcd171715b9\") " Oct 07 12:47:18 crc kubenswrapper[5024]: I1007 12:47:18.949642 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5365f54a-61b6-482e-9fc7-5dcd171715b9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5365f54a-61b6-482e-9fc7-5dcd171715b9" (UID: "5365f54a-61b6-482e-9fc7-5dcd171715b9"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:47:18 crc kubenswrapper[5024]: I1007 12:47:18.949729 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5365f54a-61b6-482e-9fc7-5dcd171715b9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5365f54a-61b6-482e-9fc7-5dcd171715b9" (UID: "5365f54a-61b6-482e-9fc7-5dcd171715b9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:47:18 crc kubenswrapper[5024]: I1007 12:47:18.955100 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5365f54a-61b6-482e-9fc7-5dcd171715b9-kube-api-access-pqnpq" (OuterVolumeSpecName: "kube-api-access-pqnpq") pod "5365f54a-61b6-482e-9fc7-5dcd171715b9" (UID: "5365f54a-61b6-482e-9fc7-5dcd171715b9"). InnerVolumeSpecName "kube-api-access-pqnpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:47:18 crc kubenswrapper[5024]: I1007 12:47:18.957252 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5365f54a-61b6-482e-9fc7-5dcd171715b9-scripts" (OuterVolumeSpecName: "scripts") pod "5365f54a-61b6-482e-9fc7-5dcd171715b9" (UID: "5365f54a-61b6-482e-9fc7-5dcd171715b9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:18 crc kubenswrapper[5024]: I1007 12:47:18.977655 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5365f54a-61b6-482e-9fc7-5dcd171715b9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5365f54a-61b6-482e-9fc7-5dcd171715b9" (UID: "5365f54a-61b6-482e-9fc7-5dcd171715b9"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.022961 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5365f54a-61b6-482e-9fc7-5dcd171715b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5365f54a-61b6-482e-9fc7-5dcd171715b9" (UID: "5365f54a-61b6-482e-9fc7-5dcd171715b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.049678 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5365f54a-61b6-482e-9fc7-5dcd171715b9-scripts\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.049716 5024 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5365f54a-61b6-482e-9fc7-5dcd171715b9-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.049726 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5365f54a-61b6-482e-9fc7-5dcd171715b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.049735 5024 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5365f54a-61b6-482e-9fc7-5dcd171715b9-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.049743 5024 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5365f54a-61b6-482e-9fc7-5dcd171715b9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.049751 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqnpq\" (UniqueName: 
\"kubernetes.io/projected/5365f54a-61b6-482e-9fc7-5dcd171715b9-kube-api-access-pqnpq\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.057741 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5365f54a-61b6-482e-9fc7-5dcd171715b9-config-data" (OuterVolumeSpecName: "config-data") pod "5365f54a-61b6-482e-9fc7-5dcd171715b9" (UID: "5365f54a-61b6-482e-9fc7-5dcd171715b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.150961 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5365f54a-61b6-482e-9fc7-5dcd171715b9-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.523126 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.568115 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-wsqxr"] Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.568416 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb684768f-wsqxr" podUID="3cbb1818-3bab-4718-a7d0-5d75056b4c46" containerName="dnsmasq-dns" containerID="cri-o://583832e45e6cab5aebe68e71514661333f675b4793e31bbb1fb7f5cfb8abc86a" gracePeriod=10 Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.688844 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5365f54a-61b6-482e-9fc7-5dcd171715b9","Type":"ContainerDied","Data":"8c8458af6e69160d15ba07e19e9fad587c536841572cccbef1a1c053f72ab0b2"} Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.688913 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.689120 5024 scope.go:117] "RemoveContainer" containerID="8b5278e61e9b259279c8572d2c98076cdb5a74b90081e441968fba63ced17b7c" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.749873 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.763213 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.767520 5024 scope.go:117] "RemoveContainer" containerID="693772bc036ec89c2c0728d5f05b6fca68f30e5992e57b416b4a729538a9ad6d" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.791394 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:47:19 crc kubenswrapper[5024]: E1007 12:47:19.791889 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3" containerName="neutron-api" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.791908 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3" containerName="neutron-api" Oct 07 12:47:19 crc kubenswrapper[5024]: E1007 12:47:19.791916 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5365f54a-61b6-482e-9fc7-5dcd171715b9" containerName="ceilometer-central-agent" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.791924 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5365f54a-61b6-482e-9fc7-5dcd171715b9" containerName="ceilometer-central-agent" Oct 07 12:47:19 crc kubenswrapper[5024]: E1007 12:47:19.791944 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3" containerName="neutron-httpd" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.791950 5024 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3" containerName="neutron-httpd" Oct 07 12:47:19 crc kubenswrapper[5024]: E1007 12:47:19.791962 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5365f54a-61b6-482e-9fc7-5dcd171715b9" containerName="proxy-httpd" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.791969 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5365f54a-61b6-482e-9fc7-5dcd171715b9" containerName="proxy-httpd" Oct 07 12:47:19 crc kubenswrapper[5024]: E1007 12:47:19.791982 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5365f54a-61b6-482e-9fc7-5dcd171715b9" containerName="ceilometer-notification-agent" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.791988 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5365f54a-61b6-482e-9fc7-5dcd171715b9" containerName="ceilometer-notification-agent" Oct 07 12:47:19 crc kubenswrapper[5024]: E1007 12:47:19.792002 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5365f54a-61b6-482e-9fc7-5dcd171715b9" containerName="sg-core" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.792010 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5365f54a-61b6-482e-9fc7-5dcd171715b9" containerName="sg-core" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.792220 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3" containerName="neutron-api" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.792233 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5365f54a-61b6-482e-9fc7-5dcd171715b9" containerName="ceilometer-notification-agent" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.792244 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5365f54a-61b6-482e-9fc7-5dcd171715b9" containerName="proxy-httpd" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.792259 5024 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="79c7d81c-3cc1-4dff-9cbb-99cb1e121cb3" containerName="neutron-httpd" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.792268 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5365f54a-61b6-482e-9fc7-5dcd171715b9" containerName="ceilometer-central-agent" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.792279 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5365f54a-61b6-482e-9fc7-5dcd171715b9" containerName="sg-core" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.793926 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.798597 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.798806 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.801267 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.848952 5024 scope.go:117] "RemoveContainer" containerID="49aae2e11bf5119536fb1f4681dbc67810b378d26580fa7677293db8025e732d" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.891864 5024 scope.go:117] "RemoveContainer" containerID="daeebef8c080b478f1181c61908f483e0c221caaf04d7dc8f94b30a452207ffa" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.964242 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b2b216-2cc1-4826-a553-d556296be9f6-log-httpd\") pod \"ceilometer-0\" (UID: \"e3b2b216-2cc1-4826-a553-d556296be9f6\") " pod="openstack/ceilometer-0" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.964339 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpnwg\" (UniqueName: \"kubernetes.io/projected/e3b2b216-2cc1-4826-a553-d556296be9f6-kube-api-access-bpnwg\") pod \"ceilometer-0\" (UID: \"e3b2b216-2cc1-4826-a553-d556296be9f6\") " pod="openstack/ceilometer-0" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.964365 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b2b216-2cc1-4826-a553-d556296be9f6-run-httpd\") pod \"ceilometer-0\" (UID: \"e3b2b216-2cc1-4826-a553-d556296be9f6\") " pod="openstack/ceilometer-0" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.964408 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3b2b216-2cc1-4826-a553-d556296be9f6-scripts\") pod \"ceilometer-0\" (UID: \"e3b2b216-2cc1-4826-a553-d556296be9f6\") " pod="openstack/ceilometer-0" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.964545 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b2b216-2cc1-4826-a553-d556296be9f6-config-data\") pod \"ceilometer-0\" (UID: \"e3b2b216-2cc1-4826-a553-d556296be9f6\") " pod="openstack/ceilometer-0" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.964787 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b2b216-2cc1-4826-a553-d556296be9f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3b2b216-2cc1-4826-a553-d556296be9f6\") " pod="openstack/ceilometer-0" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.964824 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/e3b2b216-2cc1-4826-a553-d556296be9f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e3b2b216-2cc1-4826-a553-d556296be9f6\") " pod="openstack/ceilometer-0" Oct 07 12:47:19 crc kubenswrapper[5024]: I1007 12:47:19.970194 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.027335 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.065940 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpnwg\" (UniqueName: \"kubernetes.io/projected/e3b2b216-2cc1-4826-a553-d556296be9f6-kube-api-access-bpnwg\") pod \"ceilometer-0\" (UID: \"e3b2b216-2cc1-4826-a553-d556296be9f6\") " pod="openstack/ceilometer-0" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.065992 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b2b216-2cc1-4826-a553-d556296be9f6-run-httpd\") pod \"ceilometer-0\" (UID: \"e3b2b216-2cc1-4826-a553-d556296be9f6\") " pod="openstack/ceilometer-0" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.066054 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3b2b216-2cc1-4826-a553-d556296be9f6-scripts\") pod \"ceilometer-0\" (UID: \"e3b2b216-2cc1-4826-a553-d556296be9f6\") " pod="openstack/ceilometer-0" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.066118 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b2b216-2cc1-4826-a553-d556296be9f6-config-data\") pod \"ceilometer-0\" (UID: \"e3b2b216-2cc1-4826-a553-d556296be9f6\") " pod="openstack/ceilometer-0" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.066269 5024 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b2b216-2cc1-4826-a553-d556296be9f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3b2b216-2cc1-4826-a553-d556296be9f6\") " pod="openstack/ceilometer-0" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.066301 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3b2b216-2cc1-4826-a553-d556296be9f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e3b2b216-2cc1-4826-a553-d556296be9f6\") " pod="openstack/ceilometer-0" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.066374 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b2b216-2cc1-4826-a553-d556296be9f6-log-httpd\") pod \"ceilometer-0\" (UID: \"e3b2b216-2cc1-4826-a553-d556296be9f6\") " pod="openstack/ceilometer-0" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.066815 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b2b216-2cc1-4826-a553-d556296be9f6-log-httpd\") pod \"ceilometer-0\" (UID: \"e3b2b216-2cc1-4826-a553-d556296be9f6\") " pod="openstack/ceilometer-0" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.067367 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b2b216-2cc1-4826-a553-d556296be9f6-run-httpd\") pod \"ceilometer-0\" (UID: \"e3b2b216-2cc1-4826-a553-d556296be9f6\") " pod="openstack/ceilometer-0" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.073980 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3b2b216-2cc1-4826-a553-d556296be9f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e3b2b216-2cc1-4826-a553-d556296be9f6\") 
" pod="openstack/ceilometer-0" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.078550 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3b2b216-2cc1-4826-a553-d556296be9f6-scripts\") pod \"ceilometer-0\" (UID: \"e3b2b216-2cc1-4826-a553-d556296be9f6\") " pod="openstack/ceilometer-0" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.079032 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b2b216-2cc1-4826-a553-d556296be9f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3b2b216-2cc1-4826-a553-d556296be9f6\") " pod="openstack/ceilometer-0" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.089784 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b2b216-2cc1-4826-a553-d556296be9f6-config-data\") pod \"ceilometer-0\" (UID: \"e3b2b216-2cc1-4826-a553-d556296be9f6\") " pod="openstack/ceilometer-0" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.094110 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpnwg\" (UniqueName: \"kubernetes.io/projected/e3b2b216-2cc1-4826-a553-d556296be9f6-kube-api-access-bpnwg\") pod \"ceilometer-0\" (UID: \"e3b2b216-2cc1-4826-a553-d556296be9f6\") " pod="openstack/ceilometer-0" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.122161 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.207666 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-wsqxr" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.373222 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3cbb1818-3bab-4718-a7d0-5d75056b4c46-ovsdbserver-nb\") pod \"3cbb1818-3bab-4718-a7d0-5d75056b4c46\" (UID: \"3cbb1818-3bab-4718-a7d0-5d75056b4c46\") " Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.373477 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cbb1818-3bab-4718-a7d0-5d75056b4c46-config\") pod \"3cbb1818-3bab-4718-a7d0-5d75056b4c46\" (UID: \"3cbb1818-3bab-4718-a7d0-5d75056b4c46\") " Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.373518 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3cbb1818-3bab-4718-a7d0-5d75056b4c46-ovsdbserver-sb\") pod \"3cbb1818-3bab-4718-a7d0-5d75056b4c46\" (UID: \"3cbb1818-3bab-4718-a7d0-5d75056b4c46\") " Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.373743 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfxlc\" (UniqueName: \"kubernetes.io/projected/3cbb1818-3bab-4718-a7d0-5d75056b4c46-kube-api-access-rfxlc\") pod \"3cbb1818-3bab-4718-a7d0-5d75056b4c46\" (UID: \"3cbb1818-3bab-4718-a7d0-5d75056b4c46\") " Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.373798 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cbb1818-3bab-4718-a7d0-5d75056b4c46-dns-svc\") pod \"3cbb1818-3bab-4718-a7d0-5d75056b4c46\" (UID: \"3cbb1818-3bab-4718-a7d0-5d75056b4c46\") " Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.378533 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/3cbb1818-3bab-4718-a7d0-5d75056b4c46-kube-api-access-rfxlc" (OuterVolumeSpecName: "kube-api-access-rfxlc") pod "3cbb1818-3bab-4718-a7d0-5d75056b4c46" (UID: "3cbb1818-3bab-4718-a7d0-5d75056b4c46"). InnerVolumeSpecName "kube-api-access-rfxlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.434236 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cbb1818-3bab-4718-a7d0-5d75056b4c46-config" (OuterVolumeSpecName: "config") pod "3cbb1818-3bab-4718-a7d0-5d75056b4c46" (UID: "3cbb1818-3bab-4718-a7d0-5d75056b4c46"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.435099 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cbb1818-3bab-4718-a7d0-5d75056b4c46-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3cbb1818-3bab-4718-a7d0-5d75056b4c46" (UID: "3cbb1818-3bab-4718-a7d0-5d75056b4c46"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.438595 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cbb1818-3bab-4718-a7d0-5d75056b4c46-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3cbb1818-3bab-4718-a7d0-5d75056b4c46" (UID: "3cbb1818-3bab-4718-a7d0-5d75056b4c46"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.446410 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cbb1818-3bab-4718-a7d0-5d75056b4c46-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3cbb1818-3bab-4718-a7d0-5d75056b4c46" (UID: "3cbb1818-3bab-4718-a7d0-5d75056b4c46"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.479126 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3cbb1818-3bab-4718-a7d0-5d75056b4c46-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.479178 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cbb1818-3bab-4718-a7d0-5d75056b4c46-config\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.479190 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3cbb1818-3bab-4718-a7d0-5d75056b4c46-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.479199 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfxlc\" (UniqueName: \"kubernetes.io/projected/3cbb1818-3bab-4718-a7d0-5d75056b4c46-kube-api-access-rfxlc\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.479211 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cbb1818-3bab-4718-a7d0-5d75056b4c46-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.590592 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:47:20 crc kubenswrapper[5024]: W1007 12:47:20.600058 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3b2b216_2cc1_4826_a553_d556296be9f6.slice/crio-04ecf8512345d343e2c7d82fec1a2b1e3b59161291c902aed561309531bb3d7a WatchSource:0}: Error finding container 04ecf8512345d343e2c7d82fec1a2b1e3b59161291c902aed561309531bb3d7a: Status 404 returned error can't find the container 
with id 04ecf8512345d343e2c7d82fec1a2b1e3b59161291c902aed561309531bb3d7a Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.700658 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b2b216-2cc1-4826-a553-d556296be9f6","Type":"ContainerStarted","Data":"04ecf8512345d343e2c7d82fec1a2b1e3b59161291c902aed561309531bb3d7a"} Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.703577 5024 generic.go:334] "Generic (PLEG): container finished" podID="3cbb1818-3bab-4718-a7d0-5d75056b4c46" containerID="583832e45e6cab5aebe68e71514661333f675b4793e31bbb1fb7f5cfb8abc86a" exitCode=0 Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.703661 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-wsqxr" event={"ID":"3cbb1818-3bab-4718-a7d0-5d75056b4c46","Type":"ContainerDied","Data":"583832e45e6cab5aebe68e71514661333f675b4793e31bbb1fb7f5cfb8abc86a"} Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.703700 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-wsqxr" event={"ID":"3cbb1818-3bab-4718-a7d0-5d75056b4c46","Type":"ContainerDied","Data":"8759f572d4a6a43662725a8d25213fc8979efda50f6ca85b672198f740c68eda"} Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.703722 5024 scope.go:117] "RemoveContainer" containerID="583832e45e6cab5aebe68e71514661333f675b4793e31bbb1fb7f5cfb8abc86a" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.703764 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="5b32ce35-3301-4125-9a0f-b6fe6993dddb" containerName="cinder-scheduler" containerID="cri-o://870351ed79221120352708b9048a40d6c7be3a6ec8964e4ef40258db241b9abc" gracePeriod=30 Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.703846 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" 
podUID="5b32ce35-3301-4125-9a0f-b6fe6993dddb" containerName="probe" containerID="cri-o://13f2e05faf495ec8d6dceb7decb2fb1508f0b4f480bf35a8be23b7ca4ee1890a" gracePeriod=30 Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.704005 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-wsqxr" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.731164 5024 scope.go:117] "RemoveContainer" containerID="757058021e4617820eb9fcf400dbf89cf0acf0ef4c702dd247bed1fa74d97a41" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.747217 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-wsqxr"] Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.770972 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5365f54a-61b6-482e-9fc7-5dcd171715b9" path="/var/lib/kubelet/pods/5365f54a-61b6-482e-9fc7-5dcd171715b9/volumes" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.772657 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-wsqxr"] Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.777467 5024 scope.go:117] "RemoveContainer" containerID="583832e45e6cab5aebe68e71514661333f675b4793e31bbb1fb7f5cfb8abc86a" Oct 07 12:47:20 crc kubenswrapper[5024]: E1007 12:47:20.778549 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"583832e45e6cab5aebe68e71514661333f675b4793e31bbb1fb7f5cfb8abc86a\": container with ID starting with 583832e45e6cab5aebe68e71514661333f675b4793e31bbb1fb7f5cfb8abc86a not found: ID does not exist" containerID="583832e45e6cab5aebe68e71514661333f675b4793e31bbb1fb7f5cfb8abc86a" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.778584 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"583832e45e6cab5aebe68e71514661333f675b4793e31bbb1fb7f5cfb8abc86a"} 
err="failed to get container status \"583832e45e6cab5aebe68e71514661333f675b4793e31bbb1fb7f5cfb8abc86a\": rpc error: code = NotFound desc = could not find container \"583832e45e6cab5aebe68e71514661333f675b4793e31bbb1fb7f5cfb8abc86a\": container with ID starting with 583832e45e6cab5aebe68e71514661333f675b4793e31bbb1fb7f5cfb8abc86a not found: ID does not exist" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.778606 5024 scope.go:117] "RemoveContainer" containerID="757058021e4617820eb9fcf400dbf89cf0acf0ef4c702dd247bed1fa74d97a41" Oct 07 12:47:20 crc kubenswrapper[5024]: E1007 12:47:20.778801 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"757058021e4617820eb9fcf400dbf89cf0acf0ef4c702dd247bed1fa74d97a41\": container with ID starting with 757058021e4617820eb9fcf400dbf89cf0acf0ef4c702dd247bed1fa74d97a41 not found: ID does not exist" containerID="757058021e4617820eb9fcf400dbf89cf0acf0ef4c702dd247bed1fa74d97a41" Oct 07 12:47:20 crc kubenswrapper[5024]: I1007 12:47:20.778823 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"757058021e4617820eb9fcf400dbf89cf0acf0ef4c702dd247bed1fa74d97a41"} err="failed to get container status \"757058021e4617820eb9fcf400dbf89cf0acf0ef4c702dd247bed1fa74d97a41\": rpc error: code = NotFound desc = could not find container \"757058021e4617820eb9fcf400dbf89cf0acf0ef4c702dd247bed1fa74d97a41\": container with ID starting with 757058021e4617820eb9fcf400dbf89cf0acf0ef4c702dd247bed1fa74d97a41 not found: ID does not exist" Oct 07 12:47:21 crc kubenswrapper[5024]: I1007 12:47:21.356396 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:47:21 crc kubenswrapper[5024]: I1007 12:47:21.718484 5024 generic.go:334] "Generic (PLEG): container finished" podID="5b32ce35-3301-4125-9a0f-b6fe6993dddb" containerID="13f2e05faf495ec8d6dceb7decb2fb1508f0b4f480bf35a8be23b7ca4ee1890a" 
exitCode=0 Oct 07 12:47:21 crc kubenswrapper[5024]: I1007 12:47:21.719688 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5b32ce35-3301-4125-9a0f-b6fe6993dddb","Type":"ContainerDied","Data":"13f2e05faf495ec8d6dceb7decb2fb1508f0b4f480bf35a8be23b7ca4ee1890a"} Oct 07 12:47:22 crc kubenswrapper[5024]: I1007 12:47:22.732747 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b2b216-2cc1-4826-a553-d556296be9f6","Type":"ContainerStarted","Data":"a7cf271b318d31d5adf576177f459b326bf7af40aad60ba8a33712431689c79f"} Oct 07 12:47:22 crc kubenswrapper[5024]: I1007 12:47:22.766913 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cbb1818-3bab-4718-a7d0-5d75056b4c46" path="/var/lib/kubelet/pods/3cbb1818-3bab-4718-a7d0-5d75056b4c46/volumes" Oct 07 12:47:23 crc kubenswrapper[5024]: I1007 12:47:23.744217 5024 generic.go:334] "Generic (PLEG): container finished" podID="5b32ce35-3301-4125-9a0f-b6fe6993dddb" containerID="870351ed79221120352708b9048a40d6c7be3a6ec8964e4ef40258db241b9abc" exitCode=0 Oct 07 12:47:23 crc kubenswrapper[5024]: I1007 12:47:23.744297 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5b32ce35-3301-4125-9a0f-b6fe6993dddb","Type":"ContainerDied","Data":"870351ed79221120352708b9048a40d6c7be3a6ec8964e4ef40258db241b9abc"} Oct 07 12:47:23 crc kubenswrapper[5024]: I1007 12:47:23.746869 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b2b216-2cc1-4826-a553-d556296be9f6","Type":"ContainerStarted","Data":"abaa634e1f7013849d8b0d8416d301c29bbc074ce08055721b5e94f8a4e74321"} Oct 07 12:47:23 crc kubenswrapper[5024]: I1007 12:47:23.998585 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.187248 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b32ce35-3301-4125-9a0f-b6fe6993dddb-combined-ca-bundle\") pod \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\" (UID: \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\") " Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.187305 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b32ce35-3301-4125-9a0f-b6fe6993dddb-config-data\") pod \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\" (UID: \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\") " Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.187349 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b32ce35-3301-4125-9a0f-b6fe6993dddb-scripts\") pod \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\" (UID: \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\") " Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.187386 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b32ce35-3301-4125-9a0f-b6fe6993dddb-config-data-custom\") pod \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\" (UID: \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\") " Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.187455 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5b32ce35-3301-4125-9a0f-b6fe6993dddb-etc-machine-id\") pod \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\" (UID: \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\") " Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.187525 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hp6dq\" (UniqueName: 
\"kubernetes.io/projected/5b32ce35-3301-4125-9a0f-b6fe6993dddb-kube-api-access-hp6dq\") pod \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\" (UID: \"5b32ce35-3301-4125-9a0f-b6fe6993dddb\") " Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.187963 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b32ce35-3301-4125-9a0f-b6fe6993dddb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5b32ce35-3301-4125-9a0f-b6fe6993dddb" (UID: "5b32ce35-3301-4125-9a0f-b6fe6993dddb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.188234 5024 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5b32ce35-3301-4125-9a0f-b6fe6993dddb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.192427 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b32ce35-3301-4125-9a0f-b6fe6993dddb-kube-api-access-hp6dq" (OuterVolumeSpecName: "kube-api-access-hp6dq") pod "5b32ce35-3301-4125-9a0f-b6fe6993dddb" (UID: "5b32ce35-3301-4125-9a0f-b6fe6993dddb"). InnerVolumeSpecName "kube-api-access-hp6dq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.195511 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b32ce35-3301-4125-9a0f-b6fe6993dddb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5b32ce35-3301-4125-9a0f-b6fe6993dddb" (UID: "5b32ce35-3301-4125-9a0f-b6fe6993dddb"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.196568 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b32ce35-3301-4125-9a0f-b6fe6993dddb-scripts" (OuterVolumeSpecName: "scripts") pod "5b32ce35-3301-4125-9a0f-b6fe6993dddb" (UID: "5b32ce35-3301-4125-9a0f-b6fe6993dddb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.242193 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b32ce35-3301-4125-9a0f-b6fe6993dddb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b32ce35-3301-4125-9a0f-b6fe6993dddb" (UID: "5b32ce35-3301-4125-9a0f-b6fe6993dddb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.280304 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b32ce35-3301-4125-9a0f-b6fe6993dddb-config-data" (OuterVolumeSpecName: "config-data") pod "5b32ce35-3301-4125-9a0f-b6fe6993dddb" (UID: "5b32ce35-3301-4125-9a0f-b6fe6993dddb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.291248 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b32ce35-3301-4125-9a0f-b6fe6993dddb-scripts\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.291291 5024 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b32ce35-3301-4125-9a0f-b6fe6993dddb-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.291308 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hp6dq\" (UniqueName: \"kubernetes.io/projected/5b32ce35-3301-4125-9a0f-b6fe6993dddb-kube-api-access-hp6dq\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.291321 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b32ce35-3301-4125-9a0f-b6fe6993dddb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.291445 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b32ce35-3301-4125-9a0f-b6fe6993dddb-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.808914 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-9395-account-create-cnnqv"] Oct 07 12:47:24 crc kubenswrapper[5024]: E1007 12:47:24.809549 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cbb1818-3bab-4718-a7d0-5d75056b4c46" containerName="init" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.809564 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cbb1818-3bab-4718-a7d0-5d75056b4c46" containerName="init" Oct 07 12:47:24 crc kubenswrapper[5024]: E1007 12:47:24.809591 
5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b32ce35-3301-4125-9a0f-b6fe6993dddb" containerName="probe" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.809600 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b32ce35-3301-4125-9a0f-b6fe6993dddb" containerName="probe" Oct 07 12:47:24 crc kubenswrapper[5024]: E1007 12:47:24.809618 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b32ce35-3301-4125-9a0f-b6fe6993dddb" containerName="cinder-scheduler" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.809627 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b32ce35-3301-4125-9a0f-b6fe6993dddb" containerName="cinder-scheduler" Oct 07 12:47:24 crc kubenswrapper[5024]: E1007 12:47:24.809645 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cbb1818-3bab-4718-a7d0-5d75056b4c46" containerName="dnsmasq-dns" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.809654 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cbb1818-3bab-4718-a7d0-5d75056b4c46" containerName="dnsmasq-dns" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.809850 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cbb1818-3bab-4718-a7d0-5d75056b4c46" containerName="dnsmasq-dns" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.809878 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b32ce35-3301-4125-9a0f-b6fe6993dddb" containerName="cinder-scheduler" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.809898 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b32ce35-3301-4125-9a0f-b6fe6993dddb" containerName="probe" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.810499 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9395-account-create-cnnqv"] Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.810585 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9395-account-create-cnnqv" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.813577 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b2b216-2cc1-4826-a553-d556296be9f6","Type":"ContainerStarted","Data":"b7ccb4ef147431108cddcaaf015d6120f7db694867da24781316f1d6e9584619"} Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.817394 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.833393 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5b32ce35-3301-4125-9a0f-b6fe6993dddb","Type":"ContainerDied","Data":"c329a7ba7198a73d1efc922fa2dce2d67291048e1cd84874848039e4edb90478"} Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.833441 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.833480 5024 scope.go:117] "RemoveContainer" containerID="13f2e05faf495ec8d6dceb7decb2fb1508f0b4f480bf35a8be23b7ca4ee1890a" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.865390 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.871769 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.882316 5024 scope.go:117] "RemoveContainer" containerID="870351ed79221120352708b9048a40d6c7be3a6ec8964e4ef40258db241b9abc" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.888276 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.889616 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.892535 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.899880 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.983522 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7a97-account-create-jpmgr"] Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.984676 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7a97-account-create-jpmgr" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.989215 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Oct 07 12:47:24 crc kubenswrapper[5024]: I1007 12:47:24.991987 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7a97-account-create-jpmgr"] Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.005307 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrlxg\" (UniqueName: \"kubernetes.io/projected/8b2c18a7-32a0-4768-b60c-0d01da478a8a-kube-api-access-wrlxg\") pod \"cinder-scheduler-0\" (UID: \"8b2c18a7-32a0-4768-b60c-0d01da478a8a\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.005342 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8b2c18a7-32a0-4768-b60c-0d01da478a8a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8b2c18a7-32a0-4768-b60c-0d01da478a8a\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.005432 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b2c18a7-32a0-4768-b60c-0d01da478a8a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8b2c18a7-32a0-4768-b60c-0d01da478a8a\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.005459 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgj7t\" (UniqueName: \"kubernetes.io/projected/78ceed30-6beb-456f-9ecd-9edf82215f20-kube-api-access-dgj7t\") pod \"nova-api-9395-account-create-cnnqv\" (UID: \"78ceed30-6beb-456f-9ecd-9edf82215f20\") " pod="openstack/nova-api-9395-account-create-cnnqv" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.005488 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b2c18a7-32a0-4768-b60c-0d01da478a8a-scripts\") pod \"cinder-scheduler-0\" (UID: \"8b2c18a7-32a0-4768-b60c-0d01da478a8a\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.005525 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b2c18a7-32a0-4768-b60c-0d01da478a8a-config-data\") pod \"cinder-scheduler-0\" (UID: \"8b2c18a7-32a0-4768-b60c-0d01da478a8a\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.005547 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b2c18a7-32a0-4768-b60c-0d01da478a8a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8b2c18a7-32a0-4768-b60c-0d01da478a8a\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.107912 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b2c18a7-32a0-4768-b60c-0d01da478a8a-config-data\") pod \"cinder-scheduler-0\" (UID: \"8b2c18a7-32a0-4768-b60c-0d01da478a8a\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.108336 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b2c18a7-32a0-4768-b60c-0d01da478a8a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8b2c18a7-32a0-4768-b60c-0d01da478a8a\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.108428 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrlxg\" (UniqueName: \"kubernetes.io/projected/8b2c18a7-32a0-4768-b60c-0d01da478a8a-kube-api-access-wrlxg\") pod \"cinder-scheduler-0\" (UID: \"8b2c18a7-32a0-4768-b60c-0d01da478a8a\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.108457 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8b2c18a7-32a0-4768-b60c-0d01da478a8a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8b2c18a7-32a0-4768-b60c-0d01da478a8a\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.108531 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk9xk\" (UniqueName: \"kubernetes.io/projected/5d5e7b20-6b89-434d-b940-530830f73fcb-kube-api-access-rk9xk\") pod \"nova-cell0-7a97-account-create-jpmgr\" (UID: \"5d5e7b20-6b89-434d-b940-530830f73fcb\") " pod="openstack/nova-cell0-7a97-account-create-jpmgr" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.108651 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/8b2c18a7-32a0-4768-b60c-0d01da478a8a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8b2c18a7-32a0-4768-b60c-0d01da478a8a\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.108689 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgj7t\" (UniqueName: \"kubernetes.io/projected/78ceed30-6beb-456f-9ecd-9edf82215f20-kube-api-access-dgj7t\") pod \"nova-api-9395-account-create-cnnqv\" (UID: \"78ceed30-6beb-456f-9ecd-9edf82215f20\") " pod="openstack/nova-api-9395-account-create-cnnqv" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.108722 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b2c18a7-32a0-4768-b60c-0d01da478a8a-scripts\") pod \"cinder-scheduler-0\" (UID: \"8b2c18a7-32a0-4768-b60c-0d01da478a8a\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.108923 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8b2c18a7-32a0-4768-b60c-0d01da478a8a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8b2c18a7-32a0-4768-b60c-0d01da478a8a\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.114048 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b2c18a7-32a0-4768-b60c-0d01da478a8a-scripts\") pod \"cinder-scheduler-0\" (UID: \"8b2c18a7-32a0-4768-b60c-0d01da478a8a\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.115489 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b2c18a7-32a0-4768-b60c-0d01da478a8a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"8b2c18a7-32a0-4768-b60c-0d01da478a8a\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.116192 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b2c18a7-32a0-4768-b60c-0d01da478a8a-config-data\") pod \"cinder-scheduler-0\" (UID: \"8b2c18a7-32a0-4768-b60c-0d01da478a8a\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.116678 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b2c18a7-32a0-4768-b60c-0d01da478a8a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8b2c18a7-32a0-4768-b60c-0d01da478a8a\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.126063 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgj7t\" (UniqueName: \"kubernetes.io/projected/78ceed30-6beb-456f-9ecd-9edf82215f20-kube-api-access-dgj7t\") pod \"nova-api-9395-account-create-cnnqv\" (UID: \"78ceed30-6beb-456f-9ecd-9edf82215f20\") " pod="openstack/nova-api-9395-account-create-cnnqv" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.126365 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrlxg\" (UniqueName: \"kubernetes.io/projected/8b2c18a7-32a0-4768-b60c-0d01da478a8a-kube-api-access-wrlxg\") pod \"cinder-scheduler-0\" (UID: \"8b2c18a7-32a0-4768-b60c-0d01da478a8a\") " pod="openstack/cinder-scheduler-0" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.148019 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9395-account-create-cnnqv" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.183199 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-7c29-account-create-bmsrg"] Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.184237 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7c29-account-create-bmsrg" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.196917 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7c29-account-create-bmsrg"] Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.231371 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.231470 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.232646 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk9xk\" (UniqueName: \"kubernetes.io/projected/5d5e7b20-6b89-434d-b940-530830f73fcb-kube-api-access-rk9xk\") pod \"nova-cell0-7a97-account-create-jpmgr\" (UID: \"5d5e7b20-6b89-434d-b940-530830f73fcb\") " pod="openstack/nova-cell0-7a97-account-create-jpmgr" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.233066 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9hnm\" (UniqueName: \"kubernetes.io/projected/e12621fa-274f-4792-8110-c51afe64bed0-kube-api-access-h9hnm\") pod \"nova-cell1-7c29-account-create-bmsrg\" (UID: \"e12621fa-274f-4792-8110-c51afe64bed0\") " pod="openstack/nova-cell1-7c29-account-create-bmsrg" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.265249 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk9xk\" (UniqueName: 
\"kubernetes.io/projected/5d5e7b20-6b89-434d-b940-530830f73fcb-kube-api-access-rk9xk\") pod \"nova-cell0-7a97-account-create-jpmgr\" (UID: \"5d5e7b20-6b89-434d-b940-530830f73fcb\") " pod="openstack/nova-cell0-7a97-account-create-jpmgr" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.305372 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7a97-account-create-jpmgr" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.335060 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9hnm\" (UniqueName: \"kubernetes.io/projected/e12621fa-274f-4792-8110-c51afe64bed0-kube-api-access-h9hnm\") pod \"nova-cell1-7c29-account-create-bmsrg\" (UID: \"e12621fa-274f-4792-8110-c51afe64bed0\") " pod="openstack/nova-cell1-7c29-account-create-bmsrg" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.360564 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9hnm\" (UniqueName: \"kubernetes.io/projected/e12621fa-274f-4792-8110-c51afe64bed0-kube-api-access-h9hnm\") pod \"nova-cell1-7c29-account-create-bmsrg\" (UID: \"e12621fa-274f-4792-8110-c51afe64bed0\") " pod="openstack/nova-cell1-7c29-account-create-bmsrg" Oct 07 12:47:25 crc kubenswrapper[5024]: I1007 12:47:25.648761 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7c29-account-create-bmsrg" Oct 07 12:47:26 crc kubenswrapper[5024]: W1007 12:47:26.233357 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78ceed30_6beb_456f_9ecd_9edf82215f20.slice/crio-e3eef514439d7a2debf8b8356df61fd7b39ace848564e876ae36fe9451ed9e74 WatchSource:0}: Error finding container e3eef514439d7a2debf8b8356df61fd7b39ace848564e876ae36fe9451ed9e74: Status 404 returned error can't find the container with id e3eef514439d7a2debf8b8356df61fd7b39ace848564e876ae36fe9451ed9e74 Oct 07 12:47:26 crc kubenswrapper[5024]: I1007 12:47:26.233673 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9395-account-create-cnnqv"] Oct 07 12:47:26 crc kubenswrapper[5024]: I1007 12:47:26.315308 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7a97-account-create-jpmgr"] Oct 07 12:47:26 crc kubenswrapper[5024]: I1007 12:47:26.335740 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 07 12:47:26 crc kubenswrapper[5024]: W1007 12:47:26.337266 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b2c18a7_32a0_4768_b60c_0d01da478a8a.slice/crio-4e126a185d95b1fcecc7ad5fe066ec0b6c90f75a5a138452c27c7a1f758cadd3 WatchSource:0}: Error finding container 4e126a185d95b1fcecc7ad5fe066ec0b6c90f75a5a138452c27c7a1f758cadd3: Status 404 returned error can't find the container with id 4e126a185d95b1fcecc7ad5fe066ec0b6c90f75a5a138452c27c7a1f758cadd3 Oct 07 12:47:26 crc kubenswrapper[5024]: I1007 12:47:26.344413 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7c29-account-create-bmsrg"] Oct 07 12:47:26 crc kubenswrapper[5024]: I1007 12:47:26.764718 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5b32ce35-3301-4125-9a0f-b6fe6993dddb" path="/var/lib/kubelet/pods/5b32ce35-3301-4125-9a0f-b6fe6993dddb/volumes" Oct 07 12:47:26 crc kubenswrapper[5024]: I1007 12:47:26.856253 5024 generic.go:334] "Generic (PLEG): container finished" podID="5d5e7b20-6b89-434d-b940-530830f73fcb" containerID="61da99d9a61e9a7845a12b3e29423ed7db3acb8777e2f305ca3315477f60d606" exitCode=0 Oct 07 12:47:26 crc kubenswrapper[5024]: I1007 12:47:26.856310 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7a97-account-create-jpmgr" event={"ID":"5d5e7b20-6b89-434d-b940-530830f73fcb","Type":"ContainerDied","Data":"61da99d9a61e9a7845a12b3e29423ed7db3acb8777e2f305ca3315477f60d606"} Oct 07 12:47:26 crc kubenswrapper[5024]: I1007 12:47:26.856369 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7a97-account-create-jpmgr" event={"ID":"5d5e7b20-6b89-434d-b940-530830f73fcb","Type":"ContainerStarted","Data":"1534c0bc46bc86626f2da93546bdfc6d6cab60565fa4c300a35f239034e7e2a1"} Oct 07 12:47:26 crc kubenswrapper[5024]: I1007 12:47:26.859117 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9395-account-create-cnnqv" event={"ID":"78ceed30-6beb-456f-9ecd-9edf82215f20","Type":"ContainerStarted","Data":"45f0aedce3f4d5704211792e1b7a7481ba4eb5f10caf8f32bb151ca1a115b9a3"} Oct 07 12:47:26 crc kubenswrapper[5024]: I1007 12:47:26.859186 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9395-account-create-cnnqv" event={"ID":"78ceed30-6beb-456f-9ecd-9edf82215f20","Type":"ContainerStarted","Data":"e3eef514439d7a2debf8b8356df61fd7b39ace848564e876ae36fe9451ed9e74"} Oct 07 12:47:26 crc kubenswrapper[5024]: I1007 12:47:26.860737 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8b2c18a7-32a0-4768-b60c-0d01da478a8a","Type":"ContainerStarted","Data":"4e126a185d95b1fcecc7ad5fe066ec0b6c90f75a5a138452c27c7a1f758cadd3"} Oct 07 12:47:26 crc 
kubenswrapper[5024]: I1007 12:47:26.867852 5024 generic.go:334] "Generic (PLEG): container finished" podID="e12621fa-274f-4792-8110-c51afe64bed0" containerID="040fdab56371916c47630541a4a5cf8d36cccbbc046ed9b0a9058603265d86b6" exitCode=0 Oct 07 12:47:26 crc kubenswrapper[5024]: I1007 12:47:26.867914 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7c29-account-create-bmsrg" event={"ID":"e12621fa-274f-4792-8110-c51afe64bed0","Type":"ContainerDied","Data":"040fdab56371916c47630541a4a5cf8d36cccbbc046ed9b0a9058603265d86b6"} Oct 07 12:47:26 crc kubenswrapper[5024]: I1007 12:47:26.867941 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7c29-account-create-bmsrg" event={"ID":"e12621fa-274f-4792-8110-c51afe64bed0","Type":"ContainerStarted","Data":"e4904c4fb5114ed2b72b405220faed0b96954401186d383bbf958082a8c889b2"} Oct 07 12:47:26 crc kubenswrapper[5024]: I1007 12:47:26.874377 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b2b216-2cc1-4826-a553-d556296be9f6","Type":"ContainerStarted","Data":"6ecfad20f7a6fa78eb1b3fd1638e7f1ed8dc34c1ee194a8f2183daf4a41fe799"} Oct 07 12:47:26 crc kubenswrapper[5024]: I1007 12:47:26.874628 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 07 12:47:26 crc kubenswrapper[5024]: I1007 12:47:26.874598 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3b2b216-2cc1-4826-a553-d556296be9f6" containerName="ceilometer-central-agent" containerID="cri-o://a7cf271b318d31d5adf576177f459b326bf7af40aad60ba8a33712431689c79f" gracePeriod=30 Oct 07 12:47:26 crc kubenswrapper[5024]: I1007 12:47:26.874721 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3b2b216-2cc1-4826-a553-d556296be9f6" containerName="proxy-httpd" 
containerID="cri-o://6ecfad20f7a6fa78eb1b3fd1638e7f1ed8dc34c1ee194a8f2183daf4a41fe799" gracePeriod=30 Oct 07 12:47:26 crc kubenswrapper[5024]: I1007 12:47:26.874753 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3b2b216-2cc1-4826-a553-d556296be9f6" containerName="ceilometer-notification-agent" containerID="cri-o://abaa634e1f7013849d8b0d8416d301c29bbc074ce08055721b5e94f8a4e74321" gracePeriod=30 Oct 07 12:47:26 crc kubenswrapper[5024]: I1007 12:47:26.874715 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3b2b216-2cc1-4826-a553-d556296be9f6" containerName="sg-core" containerID="cri-o://b7ccb4ef147431108cddcaaf015d6120f7db694867da24781316f1d6e9584619" gracePeriod=30 Oct 07 12:47:26 crc kubenswrapper[5024]: I1007 12:47:26.901137 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-9395-account-create-cnnqv" podStartSLOduration=2.9011202799999998 podStartE2EDuration="2.90112028s" podCreationTimestamp="2025-10-07 12:47:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:47:26.900544553 +0000 UTC m=+1184.976331391" watchObservedRunningTime="2025-10-07 12:47:26.90112028 +0000 UTC m=+1184.976907118" Oct 07 12:47:26 crc kubenswrapper[5024]: I1007 12:47:26.922557 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.488122572 podStartE2EDuration="7.922542998s" podCreationTimestamp="2025-10-07 12:47:19 +0000 UTC" firstStartedPulling="2025-10-07 12:47:20.602369123 +0000 UTC m=+1178.678155961" lastFinishedPulling="2025-10-07 12:47:26.036789549 +0000 UTC m=+1184.112576387" observedRunningTime="2025-10-07 12:47:26.922271691 +0000 UTC m=+1184.998058539" watchObservedRunningTime="2025-10-07 12:47:26.922542998 +0000 UTC m=+1184.998329836" Oct 07 
12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.189605 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.757028 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.776989 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b2b216-2cc1-4826-a553-d556296be9f6-combined-ca-bundle\") pod \"e3b2b216-2cc1-4826-a553-d556296be9f6\" (UID: \"e3b2b216-2cc1-4826-a553-d556296be9f6\") " Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.777053 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpnwg\" (UniqueName: \"kubernetes.io/projected/e3b2b216-2cc1-4826-a553-d556296be9f6-kube-api-access-bpnwg\") pod \"e3b2b216-2cc1-4826-a553-d556296be9f6\" (UID: \"e3b2b216-2cc1-4826-a553-d556296be9f6\") " Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.777088 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3b2b216-2cc1-4826-a553-d556296be9f6-scripts\") pod \"e3b2b216-2cc1-4826-a553-d556296be9f6\" (UID: \"e3b2b216-2cc1-4826-a553-d556296be9f6\") " Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.777173 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3b2b216-2cc1-4826-a553-d556296be9f6-sg-core-conf-yaml\") pod \"e3b2b216-2cc1-4826-a553-d556296be9f6\" (UID: \"e3b2b216-2cc1-4826-a553-d556296be9f6\") " Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.777235 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/e3b2b216-2cc1-4826-a553-d556296be9f6-log-httpd\") pod \"e3b2b216-2cc1-4826-a553-d556296be9f6\" (UID: \"e3b2b216-2cc1-4826-a553-d556296be9f6\") " Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.777273 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b2b216-2cc1-4826-a553-d556296be9f6-run-httpd\") pod \"e3b2b216-2cc1-4826-a553-d556296be9f6\" (UID: \"e3b2b216-2cc1-4826-a553-d556296be9f6\") " Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.777330 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b2b216-2cc1-4826-a553-d556296be9f6-config-data\") pod \"e3b2b216-2cc1-4826-a553-d556296be9f6\" (UID: \"e3b2b216-2cc1-4826-a553-d556296be9f6\") " Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.778502 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3b2b216-2cc1-4826-a553-d556296be9f6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e3b2b216-2cc1-4826-a553-d556296be9f6" (UID: "e3b2b216-2cc1-4826-a553-d556296be9f6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.778604 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3b2b216-2cc1-4826-a553-d556296be9f6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e3b2b216-2cc1-4826-a553-d556296be9f6" (UID: "e3b2b216-2cc1-4826-a553-d556296be9f6"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.783432 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3b2b216-2cc1-4826-a553-d556296be9f6-kube-api-access-bpnwg" (OuterVolumeSpecName: "kube-api-access-bpnwg") pod "e3b2b216-2cc1-4826-a553-d556296be9f6" (UID: "e3b2b216-2cc1-4826-a553-d556296be9f6"). InnerVolumeSpecName "kube-api-access-bpnwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.785497 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b2b216-2cc1-4826-a553-d556296be9f6-scripts" (OuterVolumeSpecName: "scripts") pod "e3b2b216-2cc1-4826-a553-d556296be9f6" (UID: "e3b2b216-2cc1-4826-a553-d556296be9f6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.826543 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b2b216-2cc1-4826-a553-d556296be9f6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e3b2b216-2cc1-4826-a553-d556296be9f6" (UID: "e3b2b216-2cc1-4826-a553-d556296be9f6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.854805 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b2b216-2cc1-4826-a553-d556296be9f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3b2b216-2cc1-4826-a553-d556296be9f6" (UID: "e3b2b216-2cc1-4826-a553-d556296be9f6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.856601 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b2b216-2cc1-4826-a553-d556296be9f6-config-data" (OuterVolumeSpecName: "config-data") pod "e3b2b216-2cc1-4826-a553-d556296be9f6" (UID: "e3b2b216-2cc1-4826-a553-d556296be9f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.878690 5024 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b2b216-2cc1-4826-a553-d556296be9f6-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.878714 5024 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b2b216-2cc1-4826-a553-d556296be9f6-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.878724 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b2b216-2cc1-4826-a553-d556296be9f6-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.878733 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b2b216-2cc1-4826-a553-d556296be9f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.878745 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpnwg\" (UniqueName: \"kubernetes.io/projected/e3b2b216-2cc1-4826-a553-d556296be9f6-kube-api-access-bpnwg\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.878753 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e3b2b216-2cc1-4826-a553-d556296be9f6-scripts\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.878765 5024 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3b2b216-2cc1-4826-a553-d556296be9f6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.887216 5024 generic.go:334] "Generic (PLEG): container finished" podID="78ceed30-6beb-456f-9ecd-9edf82215f20" containerID="45f0aedce3f4d5704211792e1b7a7481ba4eb5f10caf8f32bb151ca1a115b9a3" exitCode=0 Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.887316 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9395-account-create-cnnqv" event={"ID":"78ceed30-6beb-456f-9ecd-9edf82215f20","Type":"ContainerDied","Data":"45f0aedce3f4d5704211792e1b7a7481ba4eb5f10caf8f32bb151ca1a115b9a3"} Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.890660 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8b2c18a7-32a0-4768-b60c-0d01da478a8a","Type":"ContainerStarted","Data":"92e8d6a229674b97f87bed80550dcccbd488d0547db76232b490c8a35e915f8b"} Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.890709 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8b2c18a7-32a0-4768-b60c-0d01da478a8a","Type":"ContainerStarted","Data":"449eea50003204f78b3d8bcbc0f733dfc30fd6156358a25f0475c1f0215487e2"} Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.903494 5024 generic.go:334] "Generic (PLEG): container finished" podID="e3b2b216-2cc1-4826-a553-d556296be9f6" containerID="6ecfad20f7a6fa78eb1b3fd1638e7f1ed8dc34c1ee194a8f2183daf4a41fe799" exitCode=0 Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.903543 5024 generic.go:334] "Generic (PLEG): container finished" podID="e3b2b216-2cc1-4826-a553-d556296be9f6" 
containerID="b7ccb4ef147431108cddcaaf015d6120f7db694867da24781316f1d6e9584619" exitCode=2 Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.903756 5024 generic.go:334] "Generic (PLEG): container finished" podID="e3b2b216-2cc1-4826-a553-d556296be9f6" containerID="abaa634e1f7013849d8b0d8416d301c29bbc074ce08055721b5e94f8a4e74321" exitCode=0 Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.903770 5024 generic.go:334] "Generic (PLEG): container finished" podID="e3b2b216-2cc1-4826-a553-d556296be9f6" containerID="a7cf271b318d31d5adf576177f459b326bf7af40aad60ba8a33712431689c79f" exitCode=0 Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.904018 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.910350 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b2b216-2cc1-4826-a553-d556296be9f6","Type":"ContainerDied","Data":"6ecfad20f7a6fa78eb1b3fd1638e7f1ed8dc34c1ee194a8f2183daf4a41fe799"} Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.910424 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b2b216-2cc1-4826-a553-d556296be9f6","Type":"ContainerDied","Data":"b7ccb4ef147431108cddcaaf015d6120f7db694867da24781316f1d6e9584619"} Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.910437 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b2b216-2cc1-4826-a553-d556296be9f6","Type":"ContainerDied","Data":"abaa634e1f7013849d8b0d8416d301c29bbc074ce08055721b5e94f8a4e74321"} Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.910470 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b2b216-2cc1-4826-a553-d556296be9f6","Type":"ContainerDied","Data":"a7cf271b318d31d5adf576177f459b326bf7af40aad60ba8a33712431689c79f"} Oct 07 12:47:27 crc 
kubenswrapper[5024]: I1007 12:47:27.910482 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b2b216-2cc1-4826-a553-d556296be9f6","Type":"ContainerDied","Data":"04ecf8512345d343e2c7d82fec1a2b1e3b59161291c902aed561309531bb3d7a"} Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.910497 5024 scope.go:117] "RemoveContainer" containerID="6ecfad20f7a6fa78eb1b3fd1638e7f1ed8dc34c1ee194a8f2183daf4a41fe799" Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.973947 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.9739206879999998 podStartE2EDuration="3.973920688s" podCreationTimestamp="2025-10-07 12:47:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:47:27.937896741 +0000 UTC m=+1186.013683599" watchObservedRunningTime="2025-10-07 12:47:27.973920688 +0000 UTC m=+1186.049707526" Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.975339 5024 scope.go:117] "RemoveContainer" containerID="b7ccb4ef147431108cddcaaf015d6120f7db694867da24781316f1d6e9584619" Oct 07 12:47:27 crc kubenswrapper[5024]: I1007 12:47:27.996548 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.007216 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.014664 5024 scope.go:117] "RemoveContainer" containerID="abaa634e1f7013849d8b0d8416d301c29bbc074ce08055721b5e94f8a4e74321" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.024806 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:47:28 crc kubenswrapper[5024]: E1007 12:47:28.025251 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b2b216-2cc1-4826-a553-d556296be9f6" 
containerName="ceilometer-central-agent" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.025267 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b2b216-2cc1-4826-a553-d556296be9f6" containerName="ceilometer-central-agent" Oct 07 12:47:28 crc kubenswrapper[5024]: E1007 12:47:28.025275 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b2b216-2cc1-4826-a553-d556296be9f6" containerName="proxy-httpd" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.025282 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b2b216-2cc1-4826-a553-d556296be9f6" containerName="proxy-httpd" Oct 07 12:47:28 crc kubenswrapper[5024]: E1007 12:47:28.025306 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b2b216-2cc1-4826-a553-d556296be9f6" containerName="ceilometer-notification-agent" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.025312 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b2b216-2cc1-4826-a553-d556296be9f6" containerName="ceilometer-notification-agent" Oct 07 12:47:28 crc kubenswrapper[5024]: E1007 12:47:28.025332 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b2b216-2cc1-4826-a553-d556296be9f6" containerName="sg-core" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.025338 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b2b216-2cc1-4826-a553-d556296be9f6" containerName="sg-core" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.025482 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b2b216-2cc1-4826-a553-d556296be9f6" containerName="sg-core" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.025498 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b2b216-2cc1-4826-a553-d556296be9f6" containerName="proxy-httpd" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.025516 5024 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e3b2b216-2cc1-4826-a553-d556296be9f6" containerName="ceilometer-central-agent" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.025523 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b2b216-2cc1-4826-a553-d556296be9f6" containerName="ceilometer-notification-agent" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.027209 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.029186 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.029384 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.031375 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.082717 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\") " pod="openstack/ceilometer-0" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.082787 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-log-httpd\") pod \"ceilometer-0\" (UID: \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\") " pod="openstack/ceilometer-0" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.082818 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-scripts\") pod \"ceilometer-0\" (UID: 
\"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\") " pod="openstack/ceilometer-0" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.082847 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\") " pod="openstack/ceilometer-0" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.082943 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-config-data\") pod \"ceilometer-0\" (UID: \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\") " pod="openstack/ceilometer-0" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.082993 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-run-httpd\") pod \"ceilometer-0\" (UID: \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\") " pod="openstack/ceilometer-0" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.083046 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6t7w\" (UniqueName: \"kubernetes.io/projected/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-kube-api-access-r6t7w\") pod \"ceilometer-0\" (UID: \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\") " pod="openstack/ceilometer-0" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.093065 5024 scope.go:117] "RemoveContainer" containerID="a7cf271b318d31d5adf576177f459b326bf7af40aad60ba8a33712431689c79f" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.141305 5024 scope.go:117] "RemoveContainer" containerID="6ecfad20f7a6fa78eb1b3fd1638e7f1ed8dc34c1ee194a8f2183daf4a41fe799" Oct 07 12:47:28 crc kubenswrapper[5024]: E1007 12:47:28.142058 
5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ecfad20f7a6fa78eb1b3fd1638e7f1ed8dc34c1ee194a8f2183daf4a41fe799\": container with ID starting with 6ecfad20f7a6fa78eb1b3fd1638e7f1ed8dc34c1ee194a8f2183daf4a41fe799 not found: ID does not exist" containerID="6ecfad20f7a6fa78eb1b3fd1638e7f1ed8dc34c1ee194a8f2183daf4a41fe799" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.142106 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ecfad20f7a6fa78eb1b3fd1638e7f1ed8dc34c1ee194a8f2183daf4a41fe799"} err="failed to get container status \"6ecfad20f7a6fa78eb1b3fd1638e7f1ed8dc34c1ee194a8f2183daf4a41fe799\": rpc error: code = NotFound desc = could not find container \"6ecfad20f7a6fa78eb1b3fd1638e7f1ed8dc34c1ee194a8f2183daf4a41fe799\": container with ID starting with 6ecfad20f7a6fa78eb1b3fd1638e7f1ed8dc34c1ee194a8f2183daf4a41fe799 not found: ID does not exist" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.142136 5024 scope.go:117] "RemoveContainer" containerID="b7ccb4ef147431108cddcaaf015d6120f7db694867da24781316f1d6e9584619" Oct 07 12:47:28 crc kubenswrapper[5024]: E1007 12:47:28.142456 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7ccb4ef147431108cddcaaf015d6120f7db694867da24781316f1d6e9584619\": container with ID starting with b7ccb4ef147431108cddcaaf015d6120f7db694867da24781316f1d6e9584619 not found: ID does not exist" containerID="b7ccb4ef147431108cddcaaf015d6120f7db694867da24781316f1d6e9584619" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.142496 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7ccb4ef147431108cddcaaf015d6120f7db694867da24781316f1d6e9584619"} err="failed to get container status \"b7ccb4ef147431108cddcaaf015d6120f7db694867da24781316f1d6e9584619\": rpc error: code = 
NotFound desc = could not find container \"b7ccb4ef147431108cddcaaf015d6120f7db694867da24781316f1d6e9584619\": container with ID starting with b7ccb4ef147431108cddcaaf015d6120f7db694867da24781316f1d6e9584619 not found: ID does not exist" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.142524 5024 scope.go:117] "RemoveContainer" containerID="abaa634e1f7013849d8b0d8416d301c29bbc074ce08055721b5e94f8a4e74321" Oct 07 12:47:28 crc kubenswrapper[5024]: E1007 12:47:28.142734 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abaa634e1f7013849d8b0d8416d301c29bbc074ce08055721b5e94f8a4e74321\": container with ID starting with abaa634e1f7013849d8b0d8416d301c29bbc074ce08055721b5e94f8a4e74321 not found: ID does not exist" containerID="abaa634e1f7013849d8b0d8416d301c29bbc074ce08055721b5e94f8a4e74321" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.142767 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abaa634e1f7013849d8b0d8416d301c29bbc074ce08055721b5e94f8a4e74321"} err="failed to get container status \"abaa634e1f7013849d8b0d8416d301c29bbc074ce08055721b5e94f8a4e74321\": rpc error: code = NotFound desc = could not find container \"abaa634e1f7013849d8b0d8416d301c29bbc074ce08055721b5e94f8a4e74321\": container with ID starting with abaa634e1f7013849d8b0d8416d301c29bbc074ce08055721b5e94f8a4e74321 not found: ID does not exist" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.142786 5024 scope.go:117] "RemoveContainer" containerID="a7cf271b318d31d5adf576177f459b326bf7af40aad60ba8a33712431689c79f" Oct 07 12:47:28 crc kubenswrapper[5024]: E1007 12:47:28.143025 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7cf271b318d31d5adf576177f459b326bf7af40aad60ba8a33712431689c79f\": container with ID starting with 
a7cf271b318d31d5adf576177f459b326bf7af40aad60ba8a33712431689c79f not found: ID does not exist" containerID="a7cf271b318d31d5adf576177f459b326bf7af40aad60ba8a33712431689c79f" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.143052 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7cf271b318d31d5adf576177f459b326bf7af40aad60ba8a33712431689c79f"} err="failed to get container status \"a7cf271b318d31d5adf576177f459b326bf7af40aad60ba8a33712431689c79f\": rpc error: code = NotFound desc = could not find container \"a7cf271b318d31d5adf576177f459b326bf7af40aad60ba8a33712431689c79f\": container with ID starting with a7cf271b318d31d5adf576177f459b326bf7af40aad60ba8a33712431689c79f not found: ID does not exist" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.143067 5024 scope.go:117] "RemoveContainer" containerID="6ecfad20f7a6fa78eb1b3fd1638e7f1ed8dc34c1ee194a8f2183daf4a41fe799" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.143241 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ecfad20f7a6fa78eb1b3fd1638e7f1ed8dc34c1ee194a8f2183daf4a41fe799"} err="failed to get container status \"6ecfad20f7a6fa78eb1b3fd1638e7f1ed8dc34c1ee194a8f2183daf4a41fe799\": rpc error: code = NotFound desc = could not find container \"6ecfad20f7a6fa78eb1b3fd1638e7f1ed8dc34c1ee194a8f2183daf4a41fe799\": container with ID starting with 6ecfad20f7a6fa78eb1b3fd1638e7f1ed8dc34c1ee194a8f2183daf4a41fe799 not found: ID does not exist" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.143264 5024 scope.go:117] "RemoveContainer" containerID="b7ccb4ef147431108cddcaaf015d6120f7db694867da24781316f1d6e9584619" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.143425 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7ccb4ef147431108cddcaaf015d6120f7db694867da24781316f1d6e9584619"} err="failed to get container status 
\"b7ccb4ef147431108cddcaaf015d6120f7db694867da24781316f1d6e9584619\": rpc error: code = NotFound desc = could not find container \"b7ccb4ef147431108cddcaaf015d6120f7db694867da24781316f1d6e9584619\": container with ID starting with b7ccb4ef147431108cddcaaf015d6120f7db694867da24781316f1d6e9584619 not found: ID does not exist" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.143443 5024 scope.go:117] "RemoveContainer" containerID="abaa634e1f7013849d8b0d8416d301c29bbc074ce08055721b5e94f8a4e74321" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.143601 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abaa634e1f7013849d8b0d8416d301c29bbc074ce08055721b5e94f8a4e74321"} err="failed to get container status \"abaa634e1f7013849d8b0d8416d301c29bbc074ce08055721b5e94f8a4e74321\": rpc error: code = NotFound desc = could not find container \"abaa634e1f7013849d8b0d8416d301c29bbc074ce08055721b5e94f8a4e74321\": container with ID starting with abaa634e1f7013849d8b0d8416d301c29bbc074ce08055721b5e94f8a4e74321 not found: ID does not exist" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.143619 5024 scope.go:117] "RemoveContainer" containerID="a7cf271b318d31d5adf576177f459b326bf7af40aad60ba8a33712431689c79f" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.143768 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7cf271b318d31d5adf576177f459b326bf7af40aad60ba8a33712431689c79f"} err="failed to get container status \"a7cf271b318d31d5adf576177f459b326bf7af40aad60ba8a33712431689c79f\": rpc error: code = NotFound desc = could not find container \"a7cf271b318d31d5adf576177f459b326bf7af40aad60ba8a33712431689c79f\": container with ID starting with a7cf271b318d31d5adf576177f459b326bf7af40aad60ba8a33712431689c79f not found: ID does not exist" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.143785 5024 scope.go:117] "RemoveContainer" 
containerID="6ecfad20f7a6fa78eb1b3fd1638e7f1ed8dc34c1ee194a8f2183daf4a41fe799" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.143908 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ecfad20f7a6fa78eb1b3fd1638e7f1ed8dc34c1ee194a8f2183daf4a41fe799"} err="failed to get container status \"6ecfad20f7a6fa78eb1b3fd1638e7f1ed8dc34c1ee194a8f2183daf4a41fe799\": rpc error: code = NotFound desc = could not find container \"6ecfad20f7a6fa78eb1b3fd1638e7f1ed8dc34c1ee194a8f2183daf4a41fe799\": container with ID starting with 6ecfad20f7a6fa78eb1b3fd1638e7f1ed8dc34c1ee194a8f2183daf4a41fe799 not found: ID does not exist" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.143923 5024 scope.go:117] "RemoveContainer" containerID="b7ccb4ef147431108cddcaaf015d6120f7db694867da24781316f1d6e9584619" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.144066 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7ccb4ef147431108cddcaaf015d6120f7db694867da24781316f1d6e9584619"} err="failed to get container status \"b7ccb4ef147431108cddcaaf015d6120f7db694867da24781316f1d6e9584619\": rpc error: code = NotFound desc = could not find container \"b7ccb4ef147431108cddcaaf015d6120f7db694867da24781316f1d6e9584619\": container with ID starting with b7ccb4ef147431108cddcaaf015d6120f7db694867da24781316f1d6e9584619 not found: ID does not exist" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.144080 5024 scope.go:117] "RemoveContainer" containerID="abaa634e1f7013849d8b0d8416d301c29bbc074ce08055721b5e94f8a4e74321" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.144228 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abaa634e1f7013849d8b0d8416d301c29bbc074ce08055721b5e94f8a4e74321"} err="failed to get container status \"abaa634e1f7013849d8b0d8416d301c29bbc074ce08055721b5e94f8a4e74321\": rpc error: code = NotFound desc = could 
not find container \"abaa634e1f7013849d8b0d8416d301c29bbc074ce08055721b5e94f8a4e74321\": container with ID starting with abaa634e1f7013849d8b0d8416d301c29bbc074ce08055721b5e94f8a4e74321 not found: ID does not exist" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.144243 5024 scope.go:117] "RemoveContainer" containerID="a7cf271b318d31d5adf576177f459b326bf7af40aad60ba8a33712431689c79f" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.144922 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7cf271b318d31d5adf576177f459b326bf7af40aad60ba8a33712431689c79f"} err="failed to get container status \"a7cf271b318d31d5adf576177f459b326bf7af40aad60ba8a33712431689c79f\": rpc error: code = NotFound desc = could not find container \"a7cf271b318d31d5adf576177f459b326bf7af40aad60ba8a33712431689c79f\": container with ID starting with a7cf271b318d31d5adf576177f459b326bf7af40aad60ba8a33712431689c79f not found: ID does not exist" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.144949 5024 scope.go:117] "RemoveContainer" containerID="6ecfad20f7a6fa78eb1b3fd1638e7f1ed8dc34c1ee194a8f2183daf4a41fe799" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.145792 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ecfad20f7a6fa78eb1b3fd1638e7f1ed8dc34c1ee194a8f2183daf4a41fe799"} err="failed to get container status \"6ecfad20f7a6fa78eb1b3fd1638e7f1ed8dc34c1ee194a8f2183daf4a41fe799\": rpc error: code = NotFound desc = could not find container \"6ecfad20f7a6fa78eb1b3fd1638e7f1ed8dc34c1ee194a8f2183daf4a41fe799\": container with ID starting with 6ecfad20f7a6fa78eb1b3fd1638e7f1ed8dc34c1ee194a8f2183daf4a41fe799 not found: ID does not exist" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.145814 5024 scope.go:117] "RemoveContainer" containerID="b7ccb4ef147431108cddcaaf015d6120f7db694867da24781316f1d6e9584619" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 
12:47:28.146049 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7ccb4ef147431108cddcaaf015d6120f7db694867da24781316f1d6e9584619"} err="failed to get container status \"b7ccb4ef147431108cddcaaf015d6120f7db694867da24781316f1d6e9584619\": rpc error: code = NotFound desc = could not find container \"b7ccb4ef147431108cddcaaf015d6120f7db694867da24781316f1d6e9584619\": container with ID starting with b7ccb4ef147431108cddcaaf015d6120f7db694867da24781316f1d6e9584619 not found: ID does not exist" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.146079 5024 scope.go:117] "RemoveContainer" containerID="abaa634e1f7013849d8b0d8416d301c29bbc074ce08055721b5e94f8a4e74321" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.146280 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abaa634e1f7013849d8b0d8416d301c29bbc074ce08055721b5e94f8a4e74321"} err="failed to get container status \"abaa634e1f7013849d8b0d8416d301c29bbc074ce08055721b5e94f8a4e74321\": rpc error: code = NotFound desc = could not find container \"abaa634e1f7013849d8b0d8416d301c29bbc074ce08055721b5e94f8a4e74321\": container with ID starting with abaa634e1f7013849d8b0d8416d301c29bbc074ce08055721b5e94f8a4e74321 not found: ID does not exist" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.146308 5024 scope.go:117] "RemoveContainer" containerID="a7cf271b318d31d5adf576177f459b326bf7af40aad60ba8a33712431689c79f" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.146484 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7cf271b318d31d5adf576177f459b326bf7af40aad60ba8a33712431689c79f"} err="failed to get container status \"a7cf271b318d31d5adf576177f459b326bf7af40aad60ba8a33712431689c79f\": rpc error: code = NotFound desc = could not find container \"a7cf271b318d31d5adf576177f459b326bf7af40aad60ba8a33712431689c79f\": container with ID starting with 
a7cf271b318d31d5adf576177f459b326bf7af40aad60ba8a33712431689c79f not found: ID does not exist" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.185008 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-config-data\") pod \"ceilometer-0\" (UID: \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\") " pod="openstack/ceilometer-0" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.185060 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-run-httpd\") pod \"ceilometer-0\" (UID: \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\") " pod="openstack/ceilometer-0" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.185115 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6t7w\" (UniqueName: \"kubernetes.io/projected/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-kube-api-access-r6t7w\") pod \"ceilometer-0\" (UID: \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\") " pod="openstack/ceilometer-0" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.185298 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\") " pod="openstack/ceilometer-0" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.185332 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-log-httpd\") pod \"ceilometer-0\" (UID: \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\") " pod="openstack/ceilometer-0" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.185357 5024 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-scripts\") pod \"ceilometer-0\" (UID: \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\") " pod="openstack/ceilometer-0" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.186487 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\") " pod="openstack/ceilometer-0" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.194128 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-run-httpd\") pod \"ceilometer-0\" (UID: \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\") " pod="openstack/ceilometer-0" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.196530 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-log-httpd\") pod \"ceilometer-0\" (UID: \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\") " pod="openstack/ceilometer-0" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.204689 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\") " pod="openstack/ceilometer-0" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.205557 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\") " pod="openstack/ceilometer-0" Oct 07 12:47:28 crc kubenswrapper[5024]: 
I1007 12:47:28.205621 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-scripts\") pod \"ceilometer-0\" (UID: \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\") " pod="openstack/ceilometer-0" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.224841 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-config-data\") pod \"ceilometer-0\" (UID: \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\") " pod="openstack/ceilometer-0" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.227889 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6t7w\" (UniqueName: \"kubernetes.io/projected/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-kube-api-access-r6t7w\") pod \"ceilometer-0\" (UID: \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\") " pod="openstack/ceilometer-0" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.356008 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.377877 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7c29-account-create-bmsrg" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.387860 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9hnm\" (UniqueName: \"kubernetes.io/projected/e12621fa-274f-4792-8110-c51afe64bed0-kube-api-access-h9hnm\") pod \"e12621fa-274f-4792-8110-c51afe64bed0\" (UID: \"e12621fa-274f-4792-8110-c51afe64bed0\") " Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.391221 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e12621fa-274f-4792-8110-c51afe64bed0-kube-api-access-h9hnm" (OuterVolumeSpecName: "kube-api-access-h9hnm") pod "e12621fa-274f-4792-8110-c51afe64bed0" (UID: "e12621fa-274f-4792-8110-c51afe64bed0"). InnerVolumeSpecName "kube-api-access-h9hnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.391770 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7a97-account-create-jpmgr" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.489648 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rk9xk\" (UniqueName: \"kubernetes.io/projected/5d5e7b20-6b89-434d-b940-530830f73fcb-kube-api-access-rk9xk\") pod \"5d5e7b20-6b89-434d-b940-530830f73fcb\" (UID: \"5d5e7b20-6b89-434d-b940-530830f73fcb\") " Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.491553 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9hnm\" (UniqueName: \"kubernetes.io/projected/e12621fa-274f-4792-8110-c51afe64bed0-kube-api-access-h9hnm\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.493032 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d5e7b20-6b89-434d-b940-530830f73fcb-kube-api-access-rk9xk" (OuterVolumeSpecName: "kube-api-access-rk9xk") pod "5d5e7b20-6b89-434d-b940-530830f73fcb" (UID: "5d5e7b20-6b89-434d-b940-530830f73fcb"). InnerVolumeSpecName "kube-api-access-rk9xk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.592431 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rk9xk\" (UniqueName: \"kubernetes.io/projected/5d5e7b20-6b89-434d-b940-530830f73fcb-kube-api-access-rk9xk\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.762191 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3b2b216-2cc1-4826-a553-d556296be9f6" path="/var/lib/kubelet/pods/e3b2b216-2cc1-4826-a553-d556296be9f6/volumes" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.828164 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:47:28 crc kubenswrapper[5024]: W1007 12:47:28.836477 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e66dd04_a19f_4f24_91d5_cdd73fcbb6b1.slice/crio-48d9f751105b7514218d814765858c1f63317c5606911dac3e982e5500e372bb WatchSource:0}: Error finding container 48d9f751105b7514218d814765858c1f63317c5606911dac3e982e5500e372bb: Status 404 returned error can't find the container with id 48d9f751105b7514218d814765858c1f63317c5606911dac3e982e5500e372bb Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.919749 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1","Type":"ContainerStarted","Data":"48d9f751105b7514218d814765858c1f63317c5606911dac3e982e5500e372bb"} Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.921557 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7a97-account-create-jpmgr" event={"ID":"5d5e7b20-6b89-434d-b940-530830f73fcb","Type":"ContainerDied","Data":"1534c0bc46bc86626f2da93546bdfc6d6cab60565fa4c300a35f239034e7e2a1"} Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.921587 5024 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="1534c0bc46bc86626f2da93546bdfc6d6cab60565fa4c300a35f239034e7e2a1" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.921639 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7a97-account-create-jpmgr" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.924546 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7c29-account-create-bmsrg" event={"ID":"e12621fa-274f-4792-8110-c51afe64bed0","Type":"ContainerDied","Data":"e4904c4fb5114ed2b72b405220faed0b96954401186d383bbf958082a8c889b2"} Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.924576 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4904c4fb5114ed2b72b405220faed0b96954401186d383bbf958082a8c889b2" Oct 07 12:47:28 crc kubenswrapper[5024]: I1007 12:47:28.924692 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7c29-account-create-bmsrg" Oct 07 12:47:29 crc kubenswrapper[5024]: I1007 12:47:29.221001 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9395-account-create-cnnqv" Oct 07 12:47:29 crc kubenswrapper[5024]: I1007 12:47:29.407362 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgj7t\" (UniqueName: \"kubernetes.io/projected/78ceed30-6beb-456f-9ecd-9edf82215f20-kube-api-access-dgj7t\") pod \"78ceed30-6beb-456f-9ecd-9edf82215f20\" (UID: \"78ceed30-6beb-456f-9ecd-9edf82215f20\") " Oct 07 12:47:29 crc kubenswrapper[5024]: I1007 12:47:29.413288 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78ceed30-6beb-456f-9ecd-9edf82215f20-kube-api-access-dgj7t" (OuterVolumeSpecName: "kube-api-access-dgj7t") pod "78ceed30-6beb-456f-9ecd-9edf82215f20" (UID: "78ceed30-6beb-456f-9ecd-9edf82215f20"). 
InnerVolumeSpecName "kube-api-access-dgj7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:47:29 crc kubenswrapper[5024]: I1007 12:47:29.510255 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgj7t\" (UniqueName: \"kubernetes.io/projected/78ceed30-6beb-456f-9ecd-9edf82215f20-kube-api-access-dgj7t\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:29 crc kubenswrapper[5024]: I1007 12:47:29.936740 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1","Type":"ContainerStarted","Data":"15399e15920024a744428f891f3aa692bcbeb431aedbeb191349d55356927bd0"} Oct 07 12:47:29 crc kubenswrapper[5024]: I1007 12:47:29.939174 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9395-account-create-cnnqv" event={"ID":"78ceed30-6beb-456f-9ecd-9edf82215f20","Type":"ContainerDied","Data":"e3eef514439d7a2debf8b8356df61fd7b39ace848564e876ae36fe9451ed9e74"} Oct 07 12:47:29 crc kubenswrapper[5024]: I1007 12:47:29.939198 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3eef514439d7a2debf8b8356df61fd7b39ace848564e876ae36fe9451ed9e74" Oct 07 12:47:29 crc kubenswrapper[5024]: I1007 12:47:29.939337 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9395-account-create-cnnqv" Oct 07 12:47:30 crc kubenswrapper[5024]: I1007 12:47:30.180797 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fgvjl"] Oct 07 12:47:30 crc kubenswrapper[5024]: E1007 12:47:30.181473 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12621fa-274f-4792-8110-c51afe64bed0" containerName="mariadb-account-create" Oct 07 12:47:30 crc kubenswrapper[5024]: I1007 12:47:30.181490 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12621fa-274f-4792-8110-c51afe64bed0" containerName="mariadb-account-create" Oct 07 12:47:30 crc kubenswrapper[5024]: E1007 12:47:30.181506 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78ceed30-6beb-456f-9ecd-9edf82215f20" containerName="mariadb-account-create" Oct 07 12:47:30 crc kubenswrapper[5024]: I1007 12:47:30.181512 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="78ceed30-6beb-456f-9ecd-9edf82215f20" containerName="mariadb-account-create" Oct 07 12:47:30 crc kubenswrapper[5024]: E1007 12:47:30.181521 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d5e7b20-6b89-434d-b940-530830f73fcb" containerName="mariadb-account-create" Oct 07 12:47:30 crc kubenswrapper[5024]: I1007 12:47:30.181528 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d5e7b20-6b89-434d-b940-530830f73fcb" containerName="mariadb-account-create" Oct 07 12:47:30 crc kubenswrapper[5024]: I1007 12:47:30.181680 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12621fa-274f-4792-8110-c51afe64bed0" containerName="mariadb-account-create" Oct 07 12:47:30 crc kubenswrapper[5024]: I1007 12:47:30.181700 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="78ceed30-6beb-456f-9ecd-9edf82215f20" containerName="mariadb-account-create" Oct 07 12:47:30 crc kubenswrapper[5024]: I1007 12:47:30.181713 5024 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="5d5e7b20-6b89-434d-b940-530830f73fcb" containerName="mariadb-account-create" Oct 07 12:47:30 crc kubenswrapper[5024]: I1007 12:47:30.182226 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fgvjl" Oct 07 12:47:30 crc kubenswrapper[5024]: I1007 12:47:30.185470 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Oct 07 12:47:30 crc kubenswrapper[5024]: I1007 12:47:30.185553 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-7z4b8" Oct 07 12:47:30 crc kubenswrapper[5024]: I1007 12:47:30.185552 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 07 12:47:30 crc kubenswrapper[5024]: I1007 12:47:30.192277 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fgvjl"] Oct 07 12:47:30 crc kubenswrapper[5024]: I1007 12:47:30.232290 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 07 12:47:30 crc kubenswrapper[5024]: I1007 12:47:30.321930 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2082746b-d351-4905-a3f6-320ff139b534-config-data\") pod \"nova-cell0-conductor-db-sync-fgvjl\" (UID: \"2082746b-d351-4905-a3f6-320ff139b534\") " pod="openstack/nova-cell0-conductor-db-sync-fgvjl" Oct 07 12:47:30 crc kubenswrapper[5024]: I1007 12:47:30.322008 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2082746b-d351-4905-a3f6-320ff139b534-scripts\") pod \"nova-cell0-conductor-db-sync-fgvjl\" (UID: \"2082746b-d351-4905-a3f6-320ff139b534\") " pod="openstack/nova-cell0-conductor-db-sync-fgvjl" Oct 07 
12:47:30 crc kubenswrapper[5024]: I1007 12:47:30.322067 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2082746b-d351-4905-a3f6-320ff139b534-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fgvjl\" (UID: \"2082746b-d351-4905-a3f6-320ff139b534\") " pod="openstack/nova-cell0-conductor-db-sync-fgvjl" Oct 07 12:47:30 crc kubenswrapper[5024]: I1007 12:47:30.322726 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz2jz\" (UniqueName: \"kubernetes.io/projected/2082746b-d351-4905-a3f6-320ff139b534-kube-api-access-gz2jz\") pod \"nova-cell0-conductor-db-sync-fgvjl\" (UID: \"2082746b-d351-4905-a3f6-320ff139b534\") " pod="openstack/nova-cell0-conductor-db-sync-fgvjl" Oct 07 12:47:30 crc kubenswrapper[5024]: I1007 12:47:30.424377 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2082746b-d351-4905-a3f6-320ff139b534-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fgvjl\" (UID: \"2082746b-d351-4905-a3f6-320ff139b534\") " pod="openstack/nova-cell0-conductor-db-sync-fgvjl" Oct 07 12:47:30 crc kubenswrapper[5024]: I1007 12:47:30.424503 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz2jz\" (UniqueName: \"kubernetes.io/projected/2082746b-d351-4905-a3f6-320ff139b534-kube-api-access-gz2jz\") pod \"nova-cell0-conductor-db-sync-fgvjl\" (UID: \"2082746b-d351-4905-a3f6-320ff139b534\") " pod="openstack/nova-cell0-conductor-db-sync-fgvjl" Oct 07 12:47:30 crc kubenswrapper[5024]: I1007 12:47:30.424592 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2082746b-d351-4905-a3f6-320ff139b534-config-data\") pod \"nova-cell0-conductor-db-sync-fgvjl\" (UID: 
\"2082746b-d351-4905-a3f6-320ff139b534\") " pod="openstack/nova-cell0-conductor-db-sync-fgvjl" Oct 07 12:47:30 crc kubenswrapper[5024]: I1007 12:47:30.424708 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2082746b-d351-4905-a3f6-320ff139b534-scripts\") pod \"nova-cell0-conductor-db-sync-fgvjl\" (UID: \"2082746b-d351-4905-a3f6-320ff139b534\") " pod="openstack/nova-cell0-conductor-db-sync-fgvjl" Oct 07 12:47:30 crc kubenswrapper[5024]: I1007 12:47:30.428871 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2082746b-d351-4905-a3f6-320ff139b534-config-data\") pod \"nova-cell0-conductor-db-sync-fgvjl\" (UID: \"2082746b-d351-4905-a3f6-320ff139b534\") " pod="openstack/nova-cell0-conductor-db-sync-fgvjl" Oct 07 12:47:30 crc kubenswrapper[5024]: I1007 12:47:30.429126 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2082746b-d351-4905-a3f6-320ff139b534-scripts\") pod \"nova-cell0-conductor-db-sync-fgvjl\" (UID: \"2082746b-d351-4905-a3f6-320ff139b534\") " pod="openstack/nova-cell0-conductor-db-sync-fgvjl" Oct 07 12:47:30 crc kubenswrapper[5024]: I1007 12:47:30.430129 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2082746b-d351-4905-a3f6-320ff139b534-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fgvjl\" (UID: \"2082746b-d351-4905-a3f6-320ff139b534\") " pod="openstack/nova-cell0-conductor-db-sync-fgvjl" Oct 07 12:47:30 crc kubenswrapper[5024]: I1007 12:47:30.441728 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz2jz\" (UniqueName: \"kubernetes.io/projected/2082746b-d351-4905-a3f6-320ff139b534-kube-api-access-gz2jz\") pod \"nova-cell0-conductor-db-sync-fgvjl\" (UID: \"2082746b-d351-4905-a3f6-320ff139b534\") " 
pod="openstack/nova-cell0-conductor-db-sync-fgvjl" Oct 07 12:47:30 crc kubenswrapper[5024]: I1007 12:47:30.497983 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fgvjl" Oct 07 12:47:30 crc kubenswrapper[5024]: I1007 12:47:30.955113 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1","Type":"ContainerStarted","Data":"453fe40d93e66e502304a068ad2929ed9786723f39e280bc600514fa3e262642"} Oct 07 12:47:30 crc kubenswrapper[5024]: I1007 12:47:30.957093 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fgvjl"] Oct 07 12:47:31 crc kubenswrapper[5024]: I1007 12:47:31.971359 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1","Type":"ContainerStarted","Data":"75bf7c2ac4950386e8f6946434deed802e8f091d97be02bbee27caa4cbc90568"} Oct 07 12:47:31 crc kubenswrapper[5024]: I1007 12:47:31.973080 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fgvjl" event={"ID":"2082746b-d351-4905-a3f6-320ff139b534","Type":"ContainerStarted","Data":"cb6005803a96e7d7e6fa8ffe740ba354f5f76c8bf26f6b241e9d1028499d104f"} Oct 07 12:47:33 crc kubenswrapper[5024]: I1007 12:47:33.447975 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:47:33 crc kubenswrapper[5024]: I1007 12:47:33.991872 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1","Type":"ContainerStarted","Data":"8ba6d0b4c29eb5b7ff6d79409420d9196cbf0717b566b086afad11ba29572007"} Oct 07 12:47:33 crc kubenswrapper[5024]: I1007 12:47:33.992089 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1" 
containerName="ceilometer-central-agent" containerID="cri-o://15399e15920024a744428f891f3aa692bcbeb431aedbeb191349d55356927bd0" gracePeriod=30 Oct 07 12:47:33 crc kubenswrapper[5024]: I1007 12:47:33.992258 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1" containerName="ceilometer-notification-agent" containerID="cri-o://453fe40d93e66e502304a068ad2929ed9786723f39e280bc600514fa3e262642" gracePeriod=30 Oct 07 12:47:33 crc kubenswrapper[5024]: I1007 12:47:33.992217 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1" containerName="sg-core" containerID="cri-o://75bf7c2ac4950386e8f6946434deed802e8f091d97be02bbee27caa4cbc90568" gracePeriod=30 Oct 07 12:47:33 crc kubenswrapper[5024]: I1007 12:47:33.992224 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1" containerName="proxy-httpd" containerID="cri-o://8ba6d0b4c29eb5b7ff6d79409420d9196cbf0717b566b086afad11ba29572007" gracePeriod=30 Oct 07 12:47:33 crc kubenswrapper[5024]: I1007 12:47:33.993547 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 07 12:47:34 crc kubenswrapper[5024]: I1007 12:47:34.024415 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.358134073 podStartE2EDuration="7.02439623s" podCreationTimestamp="2025-10-07 12:47:27 +0000 UTC" firstStartedPulling="2025-10-07 12:47:28.838185726 +0000 UTC m=+1186.913972564" lastFinishedPulling="2025-10-07 12:47:33.504447883 +0000 UTC m=+1191.580234721" observedRunningTime="2025-10-07 12:47:34.022723141 +0000 UTC m=+1192.098509979" watchObservedRunningTime="2025-10-07 12:47:34.02439623 +0000 UTC m=+1192.100183068" Oct 07 12:47:35 crc kubenswrapper[5024]: 
I1007 12:47:35.003061 5024 generic.go:334] "Generic (PLEG): container finished" podID="5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1" containerID="8ba6d0b4c29eb5b7ff6d79409420d9196cbf0717b566b086afad11ba29572007" exitCode=0 Oct 07 12:47:35 crc kubenswrapper[5024]: I1007 12:47:35.003097 5024 generic.go:334] "Generic (PLEG): container finished" podID="5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1" containerID="75bf7c2ac4950386e8f6946434deed802e8f091d97be02bbee27caa4cbc90568" exitCode=2 Oct 07 12:47:35 crc kubenswrapper[5024]: I1007 12:47:35.003105 5024 generic.go:334] "Generic (PLEG): container finished" podID="5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1" containerID="453fe40d93e66e502304a068ad2929ed9786723f39e280bc600514fa3e262642" exitCode=0 Oct 07 12:47:35 crc kubenswrapper[5024]: I1007 12:47:35.003114 5024 generic.go:334] "Generic (PLEG): container finished" podID="5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1" containerID="15399e15920024a744428f891f3aa692bcbeb431aedbeb191349d55356927bd0" exitCode=0 Oct 07 12:47:35 crc kubenswrapper[5024]: I1007 12:47:35.003152 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1","Type":"ContainerDied","Data":"8ba6d0b4c29eb5b7ff6d79409420d9196cbf0717b566b086afad11ba29572007"} Oct 07 12:47:35 crc kubenswrapper[5024]: I1007 12:47:35.003182 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1","Type":"ContainerDied","Data":"75bf7c2ac4950386e8f6946434deed802e8f091d97be02bbee27caa4cbc90568"} Oct 07 12:47:35 crc kubenswrapper[5024]: I1007 12:47:35.003193 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1","Type":"ContainerDied","Data":"453fe40d93e66e502304a068ad2929ed9786723f39e280bc600514fa3e262642"} Oct 07 12:47:35 crc kubenswrapper[5024]: I1007 12:47:35.003202 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1","Type":"ContainerDied","Data":"15399e15920024a744428f891f3aa692bcbeb431aedbeb191349d55356927bd0"} Oct 07 12:47:35 crc kubenswrapper[5024]: I1007 12:47:35.436158 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 07 12:47:38 crc kubenswrapper[5024]: I1007 12:47:38.653285 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 07 12:47:38 crc kubenswrapper[5024]: I1007 12:47:38.685719 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6t7w\" (UniqueName: \"kubernetes.io/projected/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-kube-api-access-r6t7w\") pod \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\" (UID: \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\") " Oct 07 12:47:38 crc kubenswrapper[5024]: I1007 12:47:38.685764 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-run-httpd\") pod \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\" (UID: \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\") " Oct 07 12:47:38 crc kubenswrapper[5024]: I1007 12:47:38.685814 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-config-data\") pod \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\" (UID: \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\") " Oct 07 12:47:38 crc kubenswrapper[5024]: I1007 12:47:38.685868 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-combined-ca-bundle\") pod \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\" (UID: \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\") " Oct 07 12:47:38 crc kubenswrapper[5024]: I1007 
12:47:38.685911 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-log-httpd\") pod \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\" (UID: \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\") " Oct 07 12:47:38 crc kubenswrapper[5024]: I1007 12:47:38.686005 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-scripts\") pod \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\" (UID: \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\") " Oct 07 12:47:38 crc kubenswrapper[5024]: I1007 12:47:38.686061 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-sg-core-conf-yaml\") pod \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\" (UID: \"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1\") " Oct 07 12:47:38 crc kubenswrapper[5024]: I1007 12:47:38.686959 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1" (UID: "5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:47:38 crc kubenswrapper[5024]: I1007 12:47:38.688529 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1" (UID: "5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:47:38 crc kubenswrapper[5024]: I1007 12:47:38.691877 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-scripts" (OuterVolumeSpecName: "scripts") pod "5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1" (UID: "5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:38 crc kubenswrapper[5024]: I1007 12:47:38.691923 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-kube-api-access-r6t7w" (OuterVolumeSpecName: "kube-api-access-r6t7w") pod "5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1" (UID: "5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1"). InnerVolumeSpecName "kube-api-access-r6t7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:47:38 crc kubenswrapper[5024]: I1007 12:47:38.710844 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1" (UID: "5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:38 crc kubenswrapper[5024]: I1007 12:47:38.743566 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1" (UID: "5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:38 crc kubenswrapper[5024]: I1007 12:47:38.782862 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-config-data" (OuterVolumeSpecName: "config-data") pod "5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1" (UID: "5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:38 crc kubenswrapper[5024]: I1007 12:47:38.788221 5024 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:38 crc kubenswrapper[5024]: I1007 12:47:38.788251 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6t7w\" (UniqueName: \"kubernetes.io/projected/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-kube-api-access-r6t7w\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:38 crc kubenswrapper[5024]: I1007 12:47:38.788263 5024 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:38 crc kubenswrapper[5024]: I1007 12:47:38.788272 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:38 crc kubenswrapper[5024]: I1007 12:47:38.788282 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:38 crc kubenswrapper[5024]: I1007 12:47:38.788292 5024 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:38 crc kubenswrapper[5024]: I1007 12:47:38.788300 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1-scripts\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.044298 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.044293 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1","Type":"ContainerDied","Data":"48d9f751105b7514218d814765858c1f63317c5606911dac3e982e5500e372bb"} Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.044474 5024 scope.go:117] "RemoveContainer" containerID="8ba6d0b4c29eb5b7ff6d79409420d9196cbf0717b566b086afad11ba29572007" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.047106 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fgvjl" event={"ID":"2082746b-d351-4905-a3f6-320ff139b534","Type":"ContainerStarted","Data":"fa224f5e4caf5ba0204ee07c2121109c57c0d0079b3f8a8498713594104f1025"} Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.064277 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-fgvjl" podStartSLOduration=1.591959658 podStartE2EDuration="9.064255609s" podCreationTimestamp="2025-10-07 12:47:30 +0000 UTC" firstStartedPulling="2025-10-07 12:47:30.960963983 +0000 UTC m=+1189.036750821" lastFinishedPulling="2025-10-07 12:47:38.433259934 +0000 UTC m=+1196.509046772" observedRunningTime="2025-10-07 12:47:39.062230339 +0000 UTC m=+1197.138017177" watchObservedRunningTime="2025-10-07 12:47:39.064255609 +0000 UTC m=+1197.140042447" Oct 07 12:47:39 crc 
kubenswrapper[5024]: I1007 12:47:39.084322 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.085204 5024 scope.go:117] "RemoveContainer" containerID="75bf7c2ac4950386e8f6946434deed802e8f091d97be02bbee27caa4cbc90568" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.099922 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.118633 5024 scope.go:117] "RemoveContainer" containerID="453fe40d93e66e502304a068ad2929ed9786723f39e280bc600514fa3e262642" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.127319 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:47:39 crc kubenswrapper[5024]: E1007 12:47:39.128078 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1" containerName="ceilometer-notification-agent" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.128100 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1" containerName="ceilometer-notification-agent" Oct 07 12:47:39 crc kubenswrapper[5024]: E1007 12:47:39.128129 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1" containerName="ceilometer-central-agent" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.128148 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1" containerName="ceilometer-central-agent" Oct 07 12:47:39 crc kubenswrapper[5024]: E1007 12:47:39.128183 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1" containerName="sg-core" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.128190 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1" containerName="sg-core" 
Oct 07 12:47:39 crc kubenswrapper[5024]: E1007 12:47:39.128207 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1" containerName="proxy-httpd" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.128213 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1" containerName="proxy-httpd" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.128561 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1" containerName="ceilometer-central-agent" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.128586 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1" containerName="sg-core" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.128602 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1" containerName="proxy-httpd" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.128620 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1" containerName="ceilometer-notification-agent" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.133223 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.136778 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.138350 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.144061 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.153946 5024 scope.go:117] "RemoveContainer" containerID="15399e15920024a744428f891f3aa692bcbeb431aedbeb191349d55356927bd0" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.194963 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66a6990d-f48d-401c-953b-b09b8a654b10-scripts\") pod \"ceilometer-0\" (UID: \"66a6990d-f48d-401c-953b-b09b8a654b10\") " pod="openstack/ceilometer-0" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.195018 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66a6990d-f48d-401c-953b-b09b8a654b10-config-data\") pod \"ceilometer-0\" (UID: \"66a6990d-f48d-401c-953b-b09b8a654b10\") " pod="openstack/ceilometer-0" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.195041 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66a6990d-f48d-401c-953b-b09b8a654b10-run-httpd\") pod \"ceilometer-0\" (UID: \"66a6990d-f48d-401c-953b-b09b8a654b10\") " pod="openstack/ceilometer-0" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.195070 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/66a6990d-f48d-401c-953b-b09b8a654b10-log-httpd\") pod \"ceilometer-0\" (UID: \"66a6990d-f48d-401c-953b-b09b8a654b10\") " pod="openstack/ceilometer-0" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.195124 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66a6990d-f48d-401c-953b-b09b8a654b10-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66a6990d-f48d-401c-953b-b09b8a654b10\") " pod="openstack/ceilometer-0" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.195199 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6689z\" (UniqueName: \"kubernetes.io/projected/66a6990d-f48d-401c-953b-b09b8a654b10-kube-api-access-6689z\") pod \"ceilometer-0\" (UID: \"66a6990d-f48d-401c-953b-b09b8a654b10\") " pod="openstack/ceilometer-0" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.195222 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66a6990d-f48d-401c-953b-b09b8a654b10-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"66a6990d-f48d-401c-953b-b09b8a654b10\") " pod="openstack/ceilometer-0" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.296303 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66a6990d-f48d-401c-953b-b09b8a654b10-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66a6990d-f48d-401c-953b-b09b8a654b10\") " pod="openstack/ceilometer-0" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.296905 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66a6990d-f48d-401c-953b-b09b8a654b10-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"66a6990d-f48d-401c-953b-b09b8a654b10\") " pod="openstack/ceilometer-0" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.296986 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6689z\" (UniqueName: \"kubernetes.io/projected/66a6990d-f48d-401c-953b-b09b8a654b10-kube-api-access-6689z\") pod \"ceilometer-0\" (UID: \"66a6990d-f48d-401c-953b-b09b8a654b10\") " pod="openstack/ceilometer-0" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.297162 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66a6990d-f48d-401c-953b-b09b8a654b10-scripts\") pod \"ceilometer-0\" (UID: \"66a6990d-f48d-401c-953b-b09b8a654b10\") " pod="openstack/ceilometer-0" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.297247 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66a6990d-f48d-401c-953b-b09b8a654b10-config-data\") pod \"ceilometer-0\" (UID: \"66a6990d-f48d-401c-953b-b09b8a654b10\") " pod="openstack/ceilometer-0" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.297314 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66a6990d-f48d-401c-953b-b09b8a654b10-run-httpd\") pod \"ceilometer-0\" (UID: \"66a6990d-f48d-401c-953b-b09b8a654b10\") " pod="openstack/ceilometer-0" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.297407 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66a6990d-f48d-401c-953b-b09b8a654b10-log-httpd\") pod \"ceilometer-0\" (UID: \"66a6990d-f48d-401c-953b-b09b8a654b10\") " pod="openstack/ceilometer-0" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.297826 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/66a6990d-f48d-401c-953b-b09b8a654b10-log-httpd\") pod \"ceilometer-0\" (UID: \"66a6990d-f48d-401c-953b-b09b8a654b10\") " pod="openstack/ceilometer-0" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.298509 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66a6990d-f48d-401c-953b-b09b8a654b10-run-httpd\") pod \"ceilometer-0\" (UID: \"66a6990d-f48d-401c-953b-b09b8a654b10\") " pod="openstack/ceilometer-0" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.302179 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66a6990d-f48d-401c-953b-b09b8a654b10-config-data\") pod \"ceilometer-0\" (UID: \"66a6990d-f48d-401c-953b-b09b8a654b10\") " pod="openstack/ceilometer-0" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.302225 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66a6990d-f48d-401c-953b-b09b8a654b10-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66a6990d-f48d-401c-953b-b09b8a654b10\") " pod="openstack/ceilometer-0" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.303168 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66a6990d-f48d-401c-953b-b09b8a654b10-scripts\") pod \"ceilometer-0\" (UID: \"66a6990d-f48d-401c-953b-b09b8a654b10\") " pod="openstack/ceilometer-0" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.313624 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66a6990d-f48d-401c-953b-b09b8a654b10-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"66a6990d-f48d-401c-953b-b09b8a654b10\") " pod="openstack/ceilometer-0" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.317203 5024 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6689z\" (UniqueName: \"kubernetes.io/projected/66a6990d-f48d-401c-953b-b09b8a654b10-kube-api-access-6689z\") pod \"ceilometer-0\" (UID: \"66a6990d-f48d-401c-953b-b09b8a654b10\") " pod="openstack/ceilometer-0" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.516067 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 07 12:47:39 crc kubenswrapper[5024]: I1007 12:47:39.964301 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:47:39 crc kubenswrapper[5024]: W1007 12:47:39.969756 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66a6990d_f48d_401c_953b_b09b8a654b10.slice/crio-24f03a44b790746cfca504a38522e038ee19144104c43e3bb96577944537f852 WatchSource:0}: Error finding container 24f03a44b790746cfca504a38522e038ee19144104c43e3bb96577944537f852: Status 404 returned error can't find the container with id 24f03a44b790746cfca504a38522e038ee19144104c43e3bb96577944537f852 Oct 07 12:47:40 crc kubenswrapper[5024]: I1007 12:47:40.058117 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66a6990d-f48d-401c-953b-b09b8a654b10","Type":"ContainerStarted","Data":"24f03a44b790746cfca504a38522e038ee19144104c43e3bb96577944537f852"} Oct 07 12:47:40 crc kubenswrapper[5024]: I1007 12:47:40.778944 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1" path="/var/lib/kubelet/pods/5e66dd04-a19f-4f24-91d5-cdd73fcbb6b1/volumes" Oct 07 12:47:41 crc kubenswrapper[5024]: I1007 12:47:41.071405 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66a6990d-f48d-401c-953b-b09b8a654b10","Type":"ContainerStarted","Data":"6cbfe7a575786b01693dff43ba9d22ccab005c0adb7d202c7dd5ddd331b2217e"} Oct 07 12:47:42 crc 
kubenswrapper[5024]: I1007 12:47:42.085463 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66a6990d-f48d-401c-953b-b09b8a654b10","Type":"ContainerStarted","Data":"3ba76ed76296e33904aad24b337568a9f178fd640c4586461cb934f4661b306c"} Oct 07 12:47:43 crc kubenswrapper[5024]: I1007 12:47:43.098629 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66a6990d-f48d-401c-953b-b09b8a654b10","Type":"ContainerStarted","Data":"8427f622c9f8076465aaeb85e6ceda15375c0c4f7a75799f3653136d09be222d"} Oct 07 12:47:43 crc kubenswrapper[5024]: I1007 12:47:43.720045 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 12:47:43 crc kubenswrapper[5024]: I1007 12:47:43.720541 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 12:47:44 crc kubenswrapper[5024]: I1007 12:47:44.112084 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66a6990d-f48d-401c-953b-b09b8a654b10","Type":"ContainerStarted","Data":"055b2675d510ed984215458ba9a34bdcb701fd3d2b5b6e199169ed4ced68bd39"} Oct 07 12:47:44 crc kubenswrapper[5024]: I1007 12:47:44.114449 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 07 12:47:44 crc kubenswrapper[5024]: I1007 12:47:44.148948 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.405602136 
podStartE2EDuration="5.148927952s" podCreationTimestamp="2025-10-07 12:47:39 +0000 UTC" firstStartedPulling="2025-10-07 12:47:39.973312781 +0000 UTC m=+1198.049099619" lastFinishedPulling="2025-10-07 12:47:43.716638597 +0000 UTC m=+1201.792425435" observedRunningTime="2025-10-07 12:47:44.148510819 +0000 UTC m=+1202.224297677" watchObservedRunningTime="2025-10-07 12:47:44.148927952 +0000 UTC m=+1202.224714800" Oct 07 12:47:49 crc kubenswrapper[5024]: E1007 12:47:49.568199 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2082746b_d351_4905_a3f6_320ff139b534.slice/crio-conmon-fa224f5e4caf5ba0204ee07c2121109c57c0d0079b3f8a8498713594104f1025.scope\": RecentStats: unable to find data in memory cache]" Oct 07 12:47:50 crc kubenswrapper[5024]: I1007 12:47:50.169702 5024 generic.go:334] "Generic (PLEG): container finished" podID="2082746b-d351-4905-a3f6-320ff139b534" containerID="fa224f5e4caf5ba0204ee07c2121109c57c0d0079b3f8a8498713594104f1025" exitCode=0 Oct 07 12:47:50 crc kubenswrapper[5024]: I1007 12:47:50.169746 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fgvjl" event={"ID":"2082746b-d351-4905-a3f6-320ff139b534","Type":"ContainerDied","Data":"fa224f5e4caf5ba0204ee07c2121109c57c0d0079b3f8a8498713594104f1025"} Oct 07 12:47:51 crc kubenswrapper[5024]: I1007 12:47:51.497121 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fgvjl" Oct 07 12:47:51 crc kubenswrapper[5024]: I1007 12:47:51.606623 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2082746b-d351-4905-a3f6-320ff139b534-combined-ca-bundle\") pod \"2082746b-d351-4905-a3f6-320ff139b534\" (UID: \"2082746b-d351-4905-a3f6-320ff139b534\") " Oct 07 12:47:51 crc kubenswrapper[5024]: I1007 12:47:51.606671 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gz2jz\" (UniqueName: \"kubernetes.io/projected/2082746b-d351-4905-a3f6-320ff139b534-kube-api-access-gz2jz\") pod \"2082746b-d351-4905-a3f6-320ff139b534\" (UID: \"2082746b-d351-4905-a3f6-320ff139b534\") " Oct 07 12:47:51 crc kubenswrapper[5024]: I1007 12:47:51.606761 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2082746b-d351-4905-a3f6-320ff139b534-config-data\") pod \"2082746b-d351-4905-a3f6-320ff139b534\" (UID: \"2082746b-d351-4905-a3f6-320ff139b534\") " Oct 07 12:47:51 crc kubenswrapper[5024]: I1007 12:47:51.606887 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2082746b-d351-4905-a3f6-320ff139b534-scripts\") pod \"2082746b-d351-4905-a3f6-320ff139b534\" (UID: \"2082746b-d351-4905-a3f6-320ff139b534\") " Oct 07 12:47:51 crc kubenswrapper[5024]: I1007 12:47:51.612049 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2082746b-d351-4905-a3f6-320ff139b534-scripts" (OuterVolumeSpecName: "scripts") pod "2082746b-d351-4905-a3f6-320ff139b534" (UID: "2082746b-d351-4905-a3f6-320ff139b534"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:51 crc kubenswrapper[5024]: I1007 12:47:51.612097 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2082746b-d351-4905-a3f6-320ff139b534-kube-api-access-gz2jz" (OuterVolumeSpecName: "kube-api-access-gz2jz") pod "2082746b-d351-4905-a3f6-320ff139b534" (UID: "2082746b-d351-4905-a3f6-320ff139b534"). InnerVolumeSpecName "kube-api-access-gz2jz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:47:51 crc kubenswrapper[5024]: I1007 12:47:51.632303 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2082746b-d351-4905-a3f6-320ff139b534-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2082746b-d351-4905-a3f6-320ff139b534" (UID: "2082746b-d351-4905-a3f6-320ff139b534"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:51 crc kubenswrapper[5024]: I1007 12:47:51.636439 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2082746b-d351-4905-a3f6-320ff139b534-config-data" (OuterVolumeSpecName: "config-data") pod "2082746b-d351-4905-a3f6-320ff139b534" (UID: "2082746b-d351-4905-a3f6-320ff139b534"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:47:51 crc kubenswrapper[5024]: I1007 12:47:51.709105 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2082746b-d351-4905-a3f6-320ff139b534-scripts\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:51 crc kubenswrapper[5024]: I1007 12:47:51.709611 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gz2jz\" (UniqueName: \"kubernetes.io/projected/2082746b-d351-4905-a3f6-320ff139b534-kube-api-access-gz2jz\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:51 crc kubenswrapper[5024]: I1007 12:47:51.709703 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2082746b-d351-4905-a3f6-320ff139b534-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:51 crc kubenswrapper[5024]: I1007 12:47:51.709790 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2082746b-d351-4905-a3f6-320ff139b534-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:47:52 crc kubenswrapper[5024]: I1007 12:47:52.185167 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fgvjl" event={"ID":"2082746b-d351-4905-a3f6-320ff139b534","Type":"ContainerDied","Data":"cb6005803a96e7d7e6fa8ffe740ba354f5f76c8bf26f6b241e9d1028499d104f"} Oct 07 12:47:52 crc kubenswrapper[5024]: I1007 12:47:52.185211 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb6005803a96e7d7e6fa8ffe740ba354f5f76c8bf26f6b241e9d1028499d104f" Oct 07 12:47:52 crc kubenswrapper[5024]: I1007 12:47:52.185214 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fgvjl" Oct 07 12:47:52 crc kubenswrapper[5024]: I1007 12:47:52.281125 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 07 12:47:52 crc kubenswrapper[5024]: E1007 12:47:52.281478 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2082746b-d351-4905-a3f6-320ff139b534" containerName="nova-cell0-conductor-db-sync" Oct 07 12:47:52 crc kubenswrapper[5024]: I1007 12:47:52.281490 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2082746b-d351-4905-a3f6-320ff139b534" containerName="nova-cell0-conductor-db-sync" Oct 07 12:47:52 crc kubenswrapper[5024]: I1007 12:47:52.281660 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="2082746b-d351-4905-a3f6-320ff139b534" containerName="nova-cell0-conductor-db-sync" Oct 07 12:47:52 crc kubenswrapper[5024]: I1007 12:47:52.282240 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 07 12:47:52 crc kubenswrapper[5024]: I1007 12:47:52.294979 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 07 12:47:52 crc kubenswrapper[5024]: I1007 12:47:52.321127 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 07 12:47:52 crc kubenswrapper[5024]: I1007 12:47:52.321438 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-7z4b8" Oct 07 12:47:52 crc kubenswrapper[5024]: I1007 12:47:52.425361 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffd37ac7-f3e0-4bb1-8756-7c2700af5cad-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ffd37ac7-f3e0-4bb1-8756-7c2700af5cad\") " pod="openstack/nova-cell0-conductor-0" Oct 07 12:47:52 crc kubenswrapper[5024]: 
I1007 12:47:52.425439 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68gm2\" (UniqueName: \"kubernetes.io/projected/ffd37ac7-f3e0-4bb1-8756-7c2700af5cad-kube-api-access-68gm2\") pod \"nova-cell0-conductor-0\" (UID: \"ffd37ac7-f3e0-4bb1-8756-7c2700af5cad\") " pod="openstack/nova-cell0-conductor-0" Oct 07 12:47:52 crc kubenswrapper[5024]: I1007 12:47:52.425581 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffd37ac7-f3e0-4bb1-8756-7c2700af5cad-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ffd37ac7-f3e0-4bb1-8756-7c2700af5cad\") " pod="openstack/nova-cell0-conductor-0" Oct 07 12:47:52 crc kubenswrapper[5024]: I1007 12:47:52.527025 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffd37ac7-f3e0-4bb1-8756-7c2700af5cad-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ffd37ac7-f3e0-4bb1-8756-7c2700af5cad\") " pod="openstack/nova-cell0-conductor-0" Oct 07 12:47:52 crc kubenswrapper[5024]: I1007 12:47:52.527082 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68gm2\" (UniqueName: \"kubernetes.io/projected/ffd37ac7-f3e0-4bb1-8756-7c2700af5cad-kube-api-access-68gm2\") pod \"nova-cell0-conductor-0\" (UID: \"ffd37ac7-f3e0-4bb1-8756-7c2700af5cad\") " pod="openstack/nova-cell0-conductor-0" Oct 07 12:47:52 crc kubenswrapper[5024]: I1007 12:47:52.527158 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffd37ac7-f3e0-4bb1-8756-7c2700af5cad-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ffd37ac7-f3e0-4bb1-8756-7c2700af5cad\") " pod="openstack/nova-cell0-conductor-0" Oct 07 12:47:52 crc kubenswrapper[5024]: I1007 12:47:52.533003 5024 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffd37ac7-f3e0-4bb1-8756-7c2700af5cad-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ffd37ac7-f3e0-4bb1-8756-7c2700af5cad\") " pod="openstack/nova-cell0-conductor-0" Oct 07 12:47:52 crc kubenswrapper[5024]: I1007 12:47:52.542700 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffd37ac7-f3e0-4bb1-8756-7c2700af5cad-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ffd37ac7-f3e0-4bb1-8756-7c2700af5cad\") " pod="openstack/nova-cell0-conductor-0" Oct 07 12:47:52 crc kubenswrapper[5024]: I1007 12:47:52.543515 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68gm2\" (UniqueName: \"kubernetes.io/projected/ffd37ac7-f3e0-4bb1-8756-7c2700af5cad-kube-api-access-68gm2\") pod \"nova-cell0-conductor-0\" (UID: \"ffd37ac7-f3e0-4bb1-8756-7c2700af5cad\") " pod="openstack/nova-cell0-conductor-0" Oct 07 12:47:52 crc kubenswrapper[5024]: I1007 12:47:52.634730 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 07 12:47:53 crc kubenswrapper[5024]: I1007 12:47:53.074263 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 07 12:47:53 crc kubenswrapper[5024]: W1007 12:47:53.083382 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffd37ac7_f3e0_4bb1_8756_7c2700af5cad.slice/crio-1bce151ac351a72a36f2609526143bff27fce4e8a89594465bf6b5f9831647cb WatchSource:0}: Error finding container 1bce151ac351a72a36f2609526143bff27fce4e8a89594465bf6b5f9831647cb: Status 404 returned error can't find the container with id 1bce151ac351a72a36f2609526143bff27fce4e8a89594465bf6b5f9831647cb Oct 07 12:47:53 crc kubenswrapper[5024]: I1007 12:47:53.192326 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ffd37ac7-f3e0-4bb1-8756-7c2700af5cad","Type":"ContainerStarted","Data":"1bce151ac351a72a36f2609526143bff27fce4e8a89594465bf6b5f9831647cb"} Oct 07 12:47:54 crc kubenswrapper[5024]: I1007 12:47:54.216080 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ffd37ac7-f3e0-4bb1-8756-7c2700af5cad","Type":"ContainerStarted","Data":"81ceabe4bb90d8ebde1cf2837f31ecdf198c9f56e50b7390642ebc668abc7d79"} Oct 07 12:47:54 crc kubenswrapper[5024]: I1007 12:47:54.216325 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Oct 07 12:47:54 crc kubenswrapper[5024]: I1007 12:47:54.236600 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.236577482 podStartE2EDuration="2.236577482s" podCreationTimestamp="2025-10-07 12:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 
12:47:54.23413025 +0000 UTC m=+1212.309917108" watchObservedRunningTime="2025-10-07 12:47:54.236577482 +0000 UTC m=+1212.312364340" Oct 07 12:48:02 crc kubenswrapper[5024]: I1007 12:48:02.663020 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.080312 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-bd6lw"] Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.081515 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bd6lw" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.083334 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.083610 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.103271 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-bd6lw"] Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.215720 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvx4p\" (UniqueName: \"kubernetes.io/projected/17f46fbe-963b-4d59-b9a8-3d02e31157a3-kube-api-access-bvx4p\") pod \"nova-cell0-cell-mapping-bd6lw\" (UID: \"17f46fbe-963b-4d59-b9a8-3d02e31157a3\") " pod="openstack/nova-cell0-cell-mapping-bd6lw" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.215776 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f46fbe-963b-4d59-b9a8-3d02e31157a3-config-data\") pod \"nova-cell0-cell-mapping-bd6lw\" (UID: \"17f46fbe-963b-4d59-b9a8-3d02e31157a3\") " pod="openstack/nova-cell0-cell-mapping-bd6lw" Oct 07 
12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.215857 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17f46fbe-963b-4d59-b9a8-3d02e31157a3-scripts\") pod \"nova-cell0-cell-mapping-bd6lw\" (UID: \"17f46fbe-963b-4d59-b9a8-3d02e31157a3\") " pod="openstack/nova-cell0-cell-mapping-bd6lw" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.215888 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f46fbe-963b-4d59-b9a8-3d02e31157a3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bd6lw\" (UID: \"17f46fbe-963b-4d59-b9a8-3d02e31157a3\") " pod="openstack/nova-cell0-cell-mapping-bd6lw" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.260049 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.261387 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.263944 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.283944 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.319452 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17f46fbe-963b-4d59-b9a8-3d02e31157a3-scripts\") pod \"nova-cell0-cell-mapping-bd6lw\" (UID: \"17f46fbe-963b-4d59-b9a8-3d02e31157a3\") " pod="openstack/nova-cell0-cell-mapping-bd6lw" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.319507 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f46fbe-963b-4d59-b9a8-3d02e31157a3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bd6lw\" (UID: \"17f46fbe-963b-4d59-b9a8-3d02e31157a3\") " pod="openstack/nova-cell0-cell-mapping-bd6lw" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.319598 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvx4p\" (UniqueName: \"kubernetes.io/projected/17f46fbe-963b-4d59-b9a8-3d02e31157a3-kube-api-access-bvx4p\") pod \"nova-cell0-cell-mapping-bd6lw\" (UID: \"17f46fbe-963b-4d59-b9a8-3d02e31157a3\") " pod="openstack/nova-cell0-cell-mapping-bd6lw" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.319626 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f46fbe-963b-4d59-b9a8-3d02e31157a3-config-data\") pod \"nova-cell0-cell-mapping-bd6lw\" (UID: \"17f46fbe-963b-4d59-b9a8-3d02e31157a3\") " pod="openstack/nova-cell0-cell-mapping-bd6lw" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.341502 5024 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17f46fbe-963b-4d59-b9a8-3d02e31157a3-scripts\") pod \"nova-cell0-cell-mapping-bd6lw\" (UID: \"17f46fbe-963b-4d59-b9a8-3d02e31157a3\") " pod="openstack/nova-cell0-cell-mapping-bd6lw" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.342543 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f46fbe-963b-4d59-b9a8-3d02e31157a3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bd6lw\" (UID: \"17f46fbe-963b-4d59-b9a8-3d02e31157a3\") " pod="openstack/nova-cell0-cell-mapping-bd6lw" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.348824 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f46fbe-963b-4d59-b9a8-3d02e31157a3-config-data\") pod \"nova-cell0-cell-mapping-bd6lw\" (UID: \"17f46fbe-963b-4d59-b9a8-3d02e31157a3\") " pod="openstack/nova-cell0-cell-mapping-bd6lw" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.365081 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvx4p\" (UniqueName: \"kubernetes.io/projected/17f46fbe-963b-4d59-b9a8-3d02e31157a3-kube-api-access-bvx4p\") pod \"nova-cell0-cell-mapping-bd6lw\" (UID: \"17f46fbe-963b-4d59-b9a8-3d02e31157a3\") " pod="openstack/nova-cell0-cell-mapping-bd6lw" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.365202 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.372356 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.376860 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.380082 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.423817 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-884hb\" (UniqueName: \"kubernetes.io/projected/8fb081f8-d831-4362-8169-d4b183854adc-kube-api-access-884hb\") pod \"nova-api-0\" (UID: \"8fb081f8-d831-4362-8169-d4b183854adc\") " pod="openstack/nova-api-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.423883 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fb081f8-d831-4362-8169-d4b183854adc-logs\") pod \"nova-api-0\" (UID: \"8fb081f8-d831-4362-8169-d4b183854adc\") " pod="openstack/nova-api-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.423913 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fb081f8-d831-4362-8169-d4b183854adc-config-data\") pod \"nova-api-0\" (UID: \"8fb081f8-d831-4362-8169-d4b183854adc\") " pod="openstack/nova-api-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.423981 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fb081f8-d831-4362-8169-d4b183854adc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8fb081f8-d831-4362-8169-d4b183854adc\") " pod="openstack/nova-api-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.451276 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bd6lw" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.455670 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.456837 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.473466 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.489907 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.510430 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.514992 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.515740 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.527893 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fb081f8-d831-4362-8169-d4b183854adc-logs\") pod \"nova-api-0\" (UID: \"8fb081f8-d831-4362-8169-d4b183854adc\") " pod="openstack/nova-api-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.527952 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fb081f8-d831-4362-8169-d4b183854adc-config-data\") pod \"nova-api-0\" (UID: \"8fb081f8-d831-4362-8169-d4b183854adc\") " pod="openstack/nova-api-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.527987 5024 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aea4531c-c489-4110-9ce3-ddffc63d0ab8-config-data\") pod \"nova-scheduler-0\" (UID: \"aea4531c-c489-4110-9ce3-ddffc63d0ab8\") " pod="openstack/nova-scheduler-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.528081 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fb081f8-d831-4362-8169-d4b183854adc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8fb081f8-d831-4362-8169-d4b183854adc\") " pod="openstack/nova-api-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.528102 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aea4531c-c489-4110-9ce3-ddffc63d0ab8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"aea4531c-c489-4110-9ce3-ddffc63d0ab8\") " pod="openstack/nova-scheduler-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.529617 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fb081f8-d831-4362-8169-d4b183854adc-logs\") pod \"nova-api-0\" (UID: \"8fb081f8-d831-4362-8169-d4b183854adc\") " pod="openstack/nova-api-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.528134 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbcrk\" (UniqueName: \"kubernetes.io/projected/aea4531c-c489-4110-9ce3-ddffc63d0ab8-kube-api-access-fbcrk\") pod \"nova-scheduler-0\" (UID: \"aea4531c-c489-4110-9ce3-ddffc63d0ab8\") " pod="openstack/nova-scheduler-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.532720 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-884hb\" (UniqueName: 
\"kubernetes.io/projected/8fb081f8-d831-4362-8169-d4b183854adc-kube-api-access-884hb\") pod \"nova-api-0\" (UID: \"8fb081f8-d831-4362-8169-d4b183854adc\") " pod="openstack/nova-api-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.539048 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fb081f8-d831-4362-8169-d4b183854adc-config-data\") pod \"nova-api-0\" (UID: \"8fb081f8-d831-4362-8169-d4b183854adc\") " pod="openstack/nova-api-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.541473 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fb081f8-d831-4362-8169-d4b183854adc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8fb081f8-d831-4362-8169-d4b183854adc\") " pod="openstack/nova-api-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.549882 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.569911 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-884hb\" (UniqueName: \"kubernetes.io/projected/8fb081f8-d831-4362-8169-d4b183854adc-kube-api-access-884hb\") pod \"nova-api-0\" (UID: \"8fb081f8-d831-4362-8169-d4b183854adc\") " pod="openstack/nova-api-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.581018 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.595733 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-j5fsn"] Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.597334 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-j5fsn" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.608826 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-j5fsn"] Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.635657 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fptv8\" (UniqueName: \"kubernetes.io/projected/618f045f-fd3a-43df-a6fb-94db233769df-kube-api-access-fptv8\") pod \"nova-cell1-novncproxy-0\" (UID: \"618f045f-fd3a-43df-a6fb-94db233769df\") " pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.635705 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r57hf\" (UniqueName: \"kubernetes.io/projected/323a81dc-ab24-4fff-846c-4a18bef330d3-kube-api-access-r57hf\") pod \"nova-metadata-0\" (UID: \"323a81dc-ab24-4fff-846c-4a18bef330d3\") " pod="openstack/nova-metadata-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.635755 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aea4531c-c489-4110-9ce3-ddffc63d0ab8-config-data\") pod \"nova-scheduler-0\" (UID: \"aea4531c-c489-4110-9ce3-ddffc63d0ab8\") " pod="openstack/nova-scheduler-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.635796 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/323a81dc-ab24-4fff-846c-4a18bef330d3-config-data\") pod \"nova-metadata-0\" (UID: \"323a81dc-ab24-4fff-846c-4a18bef330d3\") " pod="openstack/nova-metadata-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.635822 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/323a81dc-ab24-4fff-846c-4a18bef330d3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"323a81dc-ab24-4fff-846c-4a18bef330d3\") " pod="openstack/nova-metadata-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.635846 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aea4531c-c489-4110-9ce3-ddffc63d0ab8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"aea4531c-c489-4110-9ce3-ddffc63d0ab8\") " pod="openstack/nova-scheduler-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.635867 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbcrk\" (UniqueName: \"kubernetes.io/projected/aea4531c-c489-4110-9ce3-ddffc63d0ab8-kube-api-access-fbcrk\") pod \"nova-scheduler-0\" (UID: \"aea4531c-c489-4110-9ce3-ddffc63d0ab8\") " pod="openstack/nova-scheduler-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.635881 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/618f045f-fd3a-43df-a6fb-94db233769df-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"618f045f-fd3a-43df-a6fb-94db233769df\") " pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.635911 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/323a81dc-ab24-4fff-846c-4a18bef330d3-logs\") pod \"nova-metadata-0\" (UID: \"323a81dc-ab24-4fff-846c-4a18bef330d3\") " pod="openstack/nova-metadata-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.635926 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/618f045f-fd3a-43df-a6fb-94db233769df-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"618f045f-fd3a-43df-a6fb-94db233769df\") " pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.639958 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aea4531c-c489-4110-9ce3-ddffc63d0ab8-config-data\") pod \"nova-scheduler-0\" (UID: \"aea4531c-c489-4110-9ce3-ddffc63d0ab8\") " pod="openstack/nova-scheduler-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.643332 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aea4531c-c489-4110-9ce3-ddffc63d0ab8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"aea4531c-c489-4110-9ce3-ddffc63d0ab8\") " pod="openstack/nova-scheduler-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.671057 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbcrk\" (UniqueName: \"kubernetes.io/projected/aea4531c-c489-4110-9ce3-ddffc63d0ab8-kube-api-access-fbcrk\") pod \"nova-scheduler-0\" (UID: \"aea4531c-c489-4110-9ce3-ddffc63d0ab8\") " pod="openstack/nova-scheduler-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.737330 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f545749d-e342-4e17-85b9-23f17ace4fdf-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-j5fsn\" (UID: \"f545749d-e342-4e17-85b9-23f17ace4fdf\") " pod="openstack/dnsmasq-dns-566b5b7845-j5fsn" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.737427 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f545749d-e342-4e17-85b9-23f17ace4fdf-dns-svc\") pod \"dnsmasq-dns-566b5b7845-j5fsn\" (UID: \"f545749d-e342-4e17-85b9-23f17ace4fdf\") " 
pod="openstack/dnsmasq-dns-566b5b7845-j5fsn" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.737454 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/323a81dc-ab24-4fff-846c-4a18bef330d3-config-data\") pod \"nova-metadata-0\" (UID: \"323a81dc-ab24-4fff-846c-4a18bef330d3\") " pod="openstack/nova-metadata-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.737472 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f545749d-e342-4e17-85b9-23f17ace4fdf-config\") pod \"dnsmasq-dns-566b5b7845-j5fsn\" (UID: \"f545749d-e342-4e17-85b9-23f17ace4fdf\") " pod="openstack/dnsmasq-dns-566b5b7845-j5fsn" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.737497 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323a81dc-ab24-4fff-846c-4a18bef330d3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"323a81dc-ab24-4fff-846c-4a18bef330d3\") " pod="openstack/nova-metadata-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.737568 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/618f045f-fd3a-43df-a6fb-94db233769df-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"618f045f-fd3a-43df-a6fb-94db233769df\") " pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.737626 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f545749d-e342-4e17-85b9-23f17ace4fdf-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-j5fsn\" (UID: \"f545749d-e342-4e17-85b9-23f17ace4fdf\") " pod="openstack/dnsmasq-dns-566b5b7845-j5fsn" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 
12:48:03.737643 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/323a81dc-ab24-4fff-846c-4a18bef330d3-logs\") pod \"nova-metadata-0\" (UID: \"323a81dc-ab24-4fff-846c-4a18bef330d3\") " pod="openstack/nova-metadata-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.737659 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/618f045f-fd3a-43df-a6fb-94db233769df-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"618f045f-fd3a-43df-a6fb-94db233769df\") " pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.737685 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bf5r\" (UniqueName: \"kubernetes.io/projected/f545749d-e342-4e17-85b9-23f17ace4fdf-kube-api-access-6bf5r\") pod \"dnsmasq-dns-566b5b7845-j5fsn\" (UID: \"f545749d-e342-4e17-85b9-23f17ace4fdf\") " pod="openstack/dnsmasq-dns-566b5b7845-j5fsn" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.737715 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fptv8\" (UniqueName: \"kubernetes.io/projected/618f045f-fd3a-43df-a6fb-94db233769df-kube-api-access-fptv8\") pod \"nova-cell1-novncproxy-0\" (UID: \"618f045f-fd3a-43df-a6fb-94db233769df\") " pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.737737 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r57hf\" (UniqueName: \"kubernetes.io/projected/323a81dc-ab24-4fff-846c-4a18bef330d3-kube-api-access-r57hf\") pod \"nova-metadata-0\" (UID: \"323a81dc-ab24-4fff-846c-4a18bef330d3\") " pod="openstack/nova-metadata-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.738636 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/323a81dc-ab24-4fff-846c-4a18bef330d3-logs\") pod \"nova-metadata-0\" (UID: \"323a81dc-ab24-4fff-846c-4a18bef330d3\") " pod="openstack/nova-metadata-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.741225 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/323a81dc-ab24-4fff-846c-4a18bef330d3-config-data\") pod \"nova-metadata-0\" (UID: \"323a81dc-ab24-4fff-846c-4a18bef330d3\") " pod="openstack/nova-metadata-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.741719 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323a81dc-ab24-4fff-846c-4a18bef330d3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"323a81dc-ab24-4fff-846c-4a18bef330d3\") " pod="openstack/nova-metadata-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.748984 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/618f045f-fd3a-43df-a6fb-94db233769df-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"618f045f-fd3a-43df-a6fb-94db233769df\") " pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.751624 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.752694 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/618f045f-fd3a-43df-a6fb-94db233769df-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"618f045f-fd3a-43df-a6fb-94db233769df\") " pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.755173 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r57hf\" (UniqueName: \"kubernetes.io/projected/323a81dc-ab24-4fff-846c-4a18bef330d3-kube-api-access-r57hf\") pod \"nova-metadata-0\" (UID: \"323a81dc-ab24-4fff-846c-4a18bef330d3\") " pod="openstack/nova-metadata-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.764442 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fptv8\" (UniqueName: \"kubernetes.io/projected/618f045f-fd3a-43df-a6fb-94db233769df-kube-api-access-fptv8\") pod \"nova-cell1-novncproxy-0\" (UID: \"618f045f-fd3a-43df-a6fb-94db233769df\") " pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.807880 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.838963 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f545749d-e342-4e17-85b9-23f17ace4fdf-config\") pod \"dnsmasq-dns-566b5b7845-j5fsn\" (UID: \"f545749d-e342-4e17-85b9-23f17ace4fdf\") " pod="openstack/dnsmasq-dns-566b5b7845-j5fsn" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.839064 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f545749d-e342-4e17-85b9-23f17ace4fdf-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-j5fsn\" (UID: \"f545749d-e342-4e17-85b9-23f17ace4fdf\") " pod="openstack/dnsmasq-dns-566b5b7845-j5fsn" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.839100 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bf5r\" (UniqueName: \"kubernetes.io/projected/f545749d-e342-4e17-85b9-23f17ace4fdf-kube-api-access-6bf5r\") pod \"dnsmasq-dns-566b5b7845-j5fsn\" (UID: \"f545749d-e342-4e17-85b9-23f17ace4fdf\") " pod="openstack/dnsmasq-dns-566b5b7845-j5fsn" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.839285 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f545749d-e342-4e17-85b9-23f17ace4fdf-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-j5fsn\" (UID: \"f545749d-e342-4e17-85b9-23f17ace4fdf\") " pod="openstack/dnsmasq-dns-566b5b7845-j5fsn" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.839336 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f545749d-e342-4e17-85b9-23f17ace4fdf-dns-svc\") pod \"dnsmasq-dns-566b5b7845-j5fsn\" (UID: \"f545749d-e342-4e17-85b9-23f17ace4fdf\") " pod="openstack/dnsmasq-dns-566b5b7845-j5fsn" Oct 
07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.839916 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f545749d-e342-4e17-85b9-23f17ace4fdf-config\") pod \"dnsmasq-dns-566b5b7845-j5fsn\" (UID: \"f545749d-e342-4e17-85b9-23f17ace4fdf\") " pod="openstack/dnsmasq-dns-566b5b7845-j5fsn" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.840076 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f545749d-e342-4e17-85b9-23f17ace4fdf-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-j5fsn\" (UID: \"f545749d-e342-4e17-85b9-23f17ace4fdf\") " pod="openstack/dnsmasq-dns-566b5b7845-j5fsn" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.840202 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f545749d-e342-4e17-85b9-23f17ace4fdf-dns-svc\") pod \"dnsmasq-dns-566b5b7845-j5fsn\" (UID: \"f545749d-e342-4e17-85b9-23f17ace4fdf\") " pod="openstack/dnsmasq-dns-566b5b7845-j5fsn" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.840224 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f545749d-e342-4e17-85b9-23f17ace4fdf-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-j5fsn\" (UID: \"f545749d-e342-4e17-85b9-23f17ace4fdf\") " pod="openstack/dnsmasq-dns-566b5b7845-j5fsn" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.859826 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bf5r\" (UniqueName: \"kubernetes.io/projected/f545749d-e342-4e17-85b9-23f17ace4fdf-kube-api-access-6bf5r\") pod \"dnsmasq-dns-566b5b7845-j5fsn\" (UID: \"f545749d-e342-4e17-85b9-23f17ace4fdf\") " pod="openstack/dnsmasq-dns-566b5b7845-j5fsn" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.940607 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 07 12:48:03 crc kubenswrapper[5024]: I1007 12:48:03.948322 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-j5fsn" Oct 07 12:48:04 crc kubenswrapper[5024]: I1007 12:48:04.077502 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-bd6lw"] Oct 07 12:48:04 crc kubenswrapper[5024]: I1007 12:48:04.165378 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 07 12:48:04 crc kubenswrapper[5024]: W1007 12:48:04.171449 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fb081f8_d831_4362_8169_d4b183854adc.slice/crio-138dd63d04382e6eb7ed2091d2f658b3053d4cc6c0535c8fd080680e930640ff WatchSource:0}: Error finding container 138dd63d04382e6eb7ed2091d2f658b3053d4cc6c0535c8fd080680e930640ff: Status 404 returned error can't find the container with id 138dd63d04382e6eb7ed2091d2f658b3053d4cc6c0535c8fd080680e930640ff Oct 07 12:48:04 crc kubenswrapper[5024]: I1007 12:48:04.193790 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bm8ht"] Oct 07 12:48:04 crc kubenswrapper[5024]: I1007 12:48:04.196352 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bm8ht" Oct 07 12:48:04 crc kubenswrapper[5024]: I1007 12:48:04.199629 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Oct 07 12:48:04 crc kubenswrapper[5024]: I1007 12:48:04.199692 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 07 12:48:04 crc kubenswrapper[5024]: I1007 12:48:04.231448 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bm8ht"] Oct 07 12:48:04 crc kubenswrapper[5024]: I1007 12:48:04.305297 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bd6lw" event={"ID":"17f46fbe-963b-4d59-b9a8-3d02e31157a3","Type":"ContainerStarted","Data":"c1da9b1a74685ce60c822fce7215110839ecd44171f6dcc1aa582f09aa6f6ba3"} Oct 07 12:48:04 crc kubenswrapper[5024]: I1007 12:48:04.312490 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 07 12:48:04 crc kubenswrapper[5024]: I1007 12:48:04.319173 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8fb081f8-d831-4362-8169-d4b183854adc","Type":"ContainerStarted","Data":"138dd63d04382e6eb7ed2091d2f658b3053d4cc6c0535c8fd080680e930640ff"} Oct 07 12:48:04 crc kubenswrapper[5024]: I1007 12:48:04.350863 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca249de4-0737-4c42-b2e9-78a0abf2bf94-config-data\") pod \"nova-cell1-conductor-db-sync-bm8ht\" (UID: \"ca249de4-0737-4c42-b2e9-78a0abf2bf94\") " pod="openstack/nova-cell1-conductor-db-sync-bm8ht" Oct 07 12:48:04 crc kubenswrapper[5024]: I1007 12:48:04.350956 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ca249de4-0737-4c42-b2e9-78a0abf2bf94-scripts\") pod \"nova-cell1-conductor-db-sync-bm8ht\" (UID: \"ca249de4-0737-4c42-b2e9-78a0abf2bf94\") " pod="openstack/nova-cell1-conductor-db-sync-bm8ht" Oct 07 12:48:04 crc kubenswrapper[5024]: I1007 12:48:04.350989 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca249de4-0737-4c42-b2e9-78a0abf2bf94-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bm8ht\" (UID: \"ca249de4-0737-4c42-b2e9-78a0abf2bf94\") " pod="openstack/nova-cell1-conductor-db-sync-bm8ht" Oct 07 12:48:04 crc kubenswrapper[5024]: I1007 12:48:04.351038 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr4vl\" (UniqueName: \"kubernetes.io/projected/ca249de4-0737-4c42-b2e9-78a0abf2bf94-kube-api-access-hr4vl\") pod \"nova-cell1-conductor-db-sync-bm8ht\" (UID: \"ca249de4-0737-4c42-b2e9-78a0abf2bf94\") " pod="openstack/nova-cell1-conductor-db-sync-bm8ht" Oct 07 12:48:04 crc kubenswrapper[5024]: I1007 12:48:04.405845 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 07 12:48:04 crc kubenswrapper[5024]: W1007 12:48:04.423321 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod618f045f_fd3a_43df_a6fb_94db233769df.slice/crio-2fc66c1807d1309544f1a8faf7e687532bc1729eb5dcbea9f293413de91bc52d WatchSource:0}: Error finding container 2fc66c1807d1309544f1a8faf7e687532bc1729eb5dcbea9f293413de91bc52d: Status 404 returned error can't find the container with id 2fc66c1807d1309544f1a8faf7e687532bc1729eb5dcbea9f293413de91bc52d Oct 07 12:48:04 crc kubenswrapper[5024]: I1007 12:48:04.455073 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ca249de4-0737-4c42-b2e9-78a0abf2bf94-scripts\") pod \"nova-cell1-conductor-db-sync-bm8ht\" (UID: \"ca249de4-0737-4c42-b2e9-78a0abf2bf94\") " pod="openstack/nova-cell1-conductor-db-sync-bm8ht" Oct 07 12:48:04 crc kubenswrapper[5024]: I1007 12:48:04.455158 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca249de4-0737-4c42-b2e9-78a0abf2bf94-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bm8ht\" (UID: \"ca249de4-0737-4c42-b2e9-78a0abf2bf94\") " pod="openstack/nova-cell1-conductor-db-sync-bm8ht" Oct 07 12:48:04 crc kubenswrapper[5024]: I1007 12:48:04.455247 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr4vl\" (UniqueName: \"kubernetes.io/projected/ca249de4-0737-4c42-b2e9-78a0abf2bf94-kube-api-access-hr4vl\") pod \"nova-cell1-conductor-db-sync-bm8ht\" (UID: \"ca249de4-0737-4c42-b2e9-78a0abf2bf94\") " pod="openstack/nova-cell1-conductor-db-sync-bm8ht" Oct 07 12:48:04 crc kubenswrapper[5024]: I1007 12:48:04.455395 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca249de4-0737-4c42-b2e9-78a0abf2bf94-config-data\") pod \"nova-cell1-conductor-db-sync-bm8ht\" (UID: \"ca249de4-0737-4c42-b2e9-78a0abf2bf94\") " pod="openstack/nova-cell1-conductor-db-sync-bm8ht" Oct 07 12:48:04 crc kubenswrapper[5024]: I1007 12:48:04.460131 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca249de4-0737-4c42-b2e9-78a0abf2bf94-config-data\") pod \"nova-cell1-conductor-db-sync-bm8ht\" (UID: \"ca249de4-0737-4c42-b2e9-78a0abf2bf94\") " pod="openstack/nova-cell1-conductor-db-sync-bm8ht" Oct 07 12:48:04 crc kubenswrapper[5024]: I1007 12:48:04.460751 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ca249de4-0737-4c42-b2e9-78a0abf2bf94-scripts\") pod \"nova-cell1-conductor-db-sync-bm8ht\" (UID: \"ca249de4-0737-4c42-b2e9-78a0abf2bf94\") " pod="openstack/nova-cell1-conductor-db-sync-bm8ht" Oct 07 12:48:04 crc kubenswrapper[5024]: I1007 12:48:04.461575 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca249de4-0737-4c42-b2e9-78a0abf2bf94-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bm8ht\" (UID: \"ca249de4-0737-4c42-b2e9-78a0abf2bf94\") " pod="openstack/nova-cell1-conductor-db-sync-bm8ht" Oct 07 12:48:04 crc kubenswrapper[5024]: I1007 12:48:04.471502 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr4vl\" (UniqueName: \"kubernetes.io/projected/ca249de4-0737-4c42-b2e9-78a0abf2bf94-kube-api-access-hr4vl\") pod \"nova-cell1-conductor-db-sync-bm8ht\" (UID: \"ca249de4-0737-4c42-b2e9-78a0abf2bf94\") " pod="openstack/nova-cell1-conductor-db-sync-bm8ht" Oct 07 12:48:04 crc kubenswrapper[5024]: I1007 12:48:04.544304 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 07 12:48:04 crc kubenswrapper[5024]: I1007 12:48:04.572954 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bm8ht" Oct 07 12:48:04 crc kubenswrapper[5024]: I1007 12:48:04.597937 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-j5fsn"] Oct 07 12:48:05 crc kubenswrapper[5024]: I1007 12:48:05.011653 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bm8ht"] Oct 07 12:48:05 crc kubenswrapper[5024]: W1007 12:48:05.026067 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca249de4_0737_4c42_b2e9_78a0abf2bf94.slice/crio-c285e4c7d727fe3c5a84f750b73032284711027257b494d45264494103d2c299 WatchSource:0}: Error finding container c285e4c7d727fe3c5a84f750b73032284711027257b494d45264494103d2c299: Status 404 returned error can't find the container with id c285e4c7d727fe3c5a84f750b73032284711027257b494d45264494103d2c299 Oct 07 12:48:05 crc kubenswrapper[5024]: I1007 12:48:05.341822 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bm8ht" event={"ID":"ca249de4-0737-4c42-b2e9-78a0abf2bf94","Type":"ContainerStarted","Data":"7e6fcbe2297b522c45d781a8673b5cc18a2f604eb779dc6aa11fc2debb78d89b"} Oct 07 12:48:05 crc kubenswrapper[5024]: I1007 12:48:05.341869 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bm8ht" event={"ID":"ca249de4-0737-4c42-b2e9-78a0abf2bf94","Type":"ContainerStarted","Data":"c285e4c7d727fe3c5a84f750b73032284711027257b494d45264494103d2c299"} Oct 07 12:48:05 crc kubenswrapper[5024]: I1007 12:48:05.344511 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"323a81dc-ab24-4fff-846c-4a18bef330d3","Type":"ContainerStarted","Data":"4b7e683abdb78faba3bde2a20e2726338a5577f46ff9536dca62ef9583a4cead"} Oct 07 12:48:05 crc kubenswrapper[5024]: I1007 12:48:05.348224 5024 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"618f045f-fd3a-43df-a6fb-94db233769df","Type":"ContainerStarted","Data":"2fc66c1807d1309544f1a8faf7e687532bc1729eb5dcbea9f293413de91bc52d"} Oct 07 12:48:05 crc kubenswrapper[5024]: I1007 12:48:05.349712 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"aea4531c-c489-4110-9ce3-ddffc63d0ab8","Type":"ContainerStarted","Data":"0bf3a7ecee8034f0372311c574b2a502388f73ca45c226d69159069300d9b3ad"} Oct 07 12:48:05 crc kubenswrapper[5024]: I1007 12:48:05.353586 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bd6lw" event={"ID":"17f46fbe-963b-4d59-b9a8-3d02e31157a3","Type":"ContainerStarted","Data":"5559e9e74501b9091eee3ee4e0089c64fe74a7fb253cb68b4fa85a7f0dbce80a"} Oct 07 12:48:05 crc kubenswrapper[5024]: I1007 12:48:05.357375 5024 generic.go:334] "Generic (PLEG): container finished" podID="f545749d-e342-4e17-85b9-23f17ace4fdf" containerID="591f7250acdd5d3b293f1b98aee24e91351c2a95e6199891995e0bdaa8f62b01" exitCode=0 Oct 07 12:48:05 crc kubenswrapper[5024]: I1007 12:48:05.357415 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-j5fsn" event={"ID":"f545749d-e342-4e17-85b9-23f17ace4fdf","Type":"ContainerDied","Data":"591f7250acdd5d3b293f1b98aee24e91351c2a95e6199891995e0bdaa8f62b01"} Oct 07 12:48:05 crc kubenswrapper[5024]: I1007 12:48:05.357438 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-j5fsn" event={"ID":"f545749d-e342-4e17-85b9-23f17ace4fdf","Type":"ContainerStarted","Data":"e7f23654d6dbcb6e9cf809624484692c3cb1b4aabdee93f8dcb9568fdbbb5471"} Oct 07 12:48:05 crc kubenswrapper[5024]: I1007 12:48:05.367486 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-bm8ht" podStartSLOduration=1.367461053 podStartE2EDuration="1.367461053s" podCreationTimestamp="2025-10-07 
12:48:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:48:05.357190972 +0000 UTC m=+1223.432977810" watchObservedRunningTime="2025-10-07 12:48:05.367461053 +0000 UTC m=+1223.443247891" Oct 07 12:48:05 crc kubenswrapper[5024]: I1007 12:48:05.383657 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-bd6lw" podStartSLOduration=2.383635838 podStartE2EDuration="2.383635838s" podCreationTimestamp="2025-10-07 12:48:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:48:05.373760308 +0000 UTC m=+1223.449547146" watchObservedRunningTime="2025-10-07 12:48:05.383635838 +0000 UTC m=+1223.459422676" Oct 07 12:48:06 crc kubenswrapper[5024]: I1007 12:48:06.376238 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-j5fsn" event={"ID":"f545749d-e342-4e17-85b9-23f17ace4fdf","Type":"ContainerStarted","Data":"c7fcc188dd585e00581d121e25d565d7a7fecc6aaff4afe87043b0d1a887b04a"} Oct 07 12:48:06 crc kubenswrapper[5024]: I1007 12:48:06.405100 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-566b5b7845-j5fsn" podStartSLOduration=3.405076189 podStartE2EDuration="3.405076189s" podCreationTimestamp="2025-10-07 12:48:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:48:06.396987922 +0000 UTC m=+1224.472774760" watchObservedRunningTime="2025-10-07 12:48:06.405076189 +0000 UTC m=+1224.480863027" Oct 07 12:48:06 crc kubenswrapper[5024]: I1007 12:48:06.959621 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 07 12:48:06 crc kubenswrapper[5024]: I1007 12:48:06.978480 5024 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 07 12:48:07 crc kubenswrapper[5024]: I1007 12:48:07.389236 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-566b5b7845-j5fsn" Oct 07 12:48:08 crc kubenswrapper[5024]: I1007 12:48:08.397551 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"323a81dc-ab24-4fff-846c-4a18bef330d3","Type":"ContainerStarted","Data":"13bfb6fa0be7d29d97cec893e1f3cf37f2568817d590392e3fe73e338c32f05e"} Oct 07 12:48:08 crc kubenswrapper[5024]: I1007 12:48:08.398234 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"323a81dc-ab24-4fff-846c-4a18bef330d3","Type":"ContainerStarted","Data":"8ff44bf7cc3733e17b7db7d69138629073a583383cd32d03a22b480cfddb7579"} Oct 07 12:48:08 crc kubenswrapper[5024]: I1007 12:48:08.397699 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="323a81dc-ab24-4fff-846c-4a18bef330d3" containerName="nova-metadata-metadata" containerID="cri-o://13bfb6fa0be7d29d97cec893e1f3cf37f2568817d590392e3fe73e338c32f05e" gracePeriod=30 Oct 07 12:48:08 crc kubenswrapper[5024]: I1007 12:48:08.397610 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="323a81dc-ab24-4fff-846c-4a18bef330d3" containerName="nova-metadata-log" containerID="cri-o://8ff44bf7cc3733e17b7db7d69138629073a583383cd32d03a22b480cfddb7579" gracePeriod=30 Oct 07 12:48:08 crc kubenswrapper[5024]: I1007 12:48:08.401254 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8fb081f8-d831-4362-8169-d4b183854adc","Type":"ContainerStarted","Data":"56e241734d99c14216496e5b7c4ed071b87cd2b77203d6d25df03a61dc4090de"} Oct 07 12:48:08 crc kubenswrapper[5024]: I1007 12:48:08.401298 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"8fb081f8-d831-4362-8169-d4b183854adc","Type":"ContainerStarted","Data":"dd55042b83ed36c4af6d7201255cbd764156a213a3e449f2ebaa3308297d4ff5"} Oct 07 12:48:08 crc kubenswrapper[5024]: I1007 12:48:08.403705 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"618f045f-fd3a-43df-a6fb-94db233769df","Type":"ContainerStarted","Data":"63b911d81b80ec741bde40682ff5b06a66288320489dca4e4cbfd92547d1ed40"} Oct 07 12:48:08 crc kubenswrapper[5024]: I1007 12:48:08.403867 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="618f045f-fd3a-43df-a6fb-94db233769df" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://63b911d81b80ec741bde40682ff5b06a66288320489dca4e4cbfd92547d1ed40" gracePeriod=30 Oct 07 12:48:08 crc kubenswrapper[5024]: I1007 12:48:08.413563 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"aea4531c-c489-4110-9ce3-ddffc63d0ab8","Type":"ContainerStarted","Data":"150f597e768263efee961e5fbbc14af0125b9083aa0b34c27bf182be9d5bfa00"} Oct 07 12:48:08 crc kubenswrapper[5024]: I1007 12:48:08.424489 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.615727046 podStartE2EDuration="5.424472341s" podCreationTimestamp="2025-10-07 12:48:03 +0000 UTC" firstStartedPulling="2025-10-07 12:48:04.554605672 +0000 UTC m=+1222.630392510" lastFinishedPulling="2025-10-07 12:48:07.363350977 +0000 UTC m=+1225.439137805" observedRunningTime="2025-10-07 12:48:08.422068571 +0000 UTC m=+1226.497855409" watchObservedRunningTime="2025-10-07 12:48:08.424472341 +0000 UTC m=+1226.500259179" Oct 07 12:48:08 crc kubenswrapper[5024]: I1007 12:48:08.448757 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.26339876 podStartE2EDuration="5.448741513s" 
podCreationTimestamp="2025-10-07 12:48:03 +0000 UTC" firstStartedPulling="2025-10-07 12:48:04.176771067 +0000 UTC m=+1222.252557905" lastFinishedPulling="2025-10-07 12:48:07.36211382 +0000 UTC m=+1225.437900658" observedRunningTime="2025-10-07 12:48:08.439577774 +0000 UTC m=+1226.515364612" watchObservedRunningTime="2025-10-07 12:48:08.448741513 +0000 UTC m=+1226.524528351" Oct 07 12:48:08 crc kubenswrapper[5024]: I1007 12:48:08.460548 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.422310042 podStartE2EDuration="5.460532249s" podCreationTimestamp="2025-10-07 12:48:03 +0000 UTC" firstStartedPulling="2025-10-07 12:48:04.318754693 +0000 UTC m=+1222.394541531" lastFinishedPulling="2025-10-07 12:48:07.3569769 +0000 UTC m=+1225.432763738" observedRunningTime="2025-10-07 12:48:08.455464771 +0000 UTC m=+1226.531251609" watchObservedRunningTime="2025-10-07 12:48:08.460532249 +0000 UTC m=+1226.536319087" Oct 07 12:48:08 crc kubenswrapper[5024]: I1007 12:48:08.473496 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.539175791 podStartE2EDuration="5.473480699s" podCreationTimestamp="2025-10-07 12:48:03 +0000 UTC" firstStartedPulling="2025-10-07 12:48:04.427918776 +0000 UTC m=+1222.503705614" lastFinishedPulling="2025-10-07 12:48:07.362223694 +0000 UTC m=+1225.438010522" observedRunningTime="2025-10-07 12:48:08.468367389 +0000 UTC m=+1226.544154237" watchObservedRunningTime="2025-10-07 12:48:08.473480699 +0000 UTC m=+1226.549267537" Oct 07 12:48:08 crc kubenswrapper[5024]: I1007 12:48:08.768678 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 07 12:48:08 crc kubenswrapper[5024]: I1007 12:48:08.808279 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:08 crc kubenswrapper[5024]: I1007 
12:48:08.941982 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 07 12:48:08 crc kubenswrapper[5024]: I1007 12:48:08.942037 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 07 12:48:08 crc kubenswrapper[5024]: I1007 12:48:08.958706 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.079377 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/323a81dc-ab24-4fff-846c-4a18bef330d3-config-data\") pod \"323a81dc-ab24-4fff-846c-4a18bef330d3\" (UID: \"323a81dc-ab24-4fff-846c-4a18bef330d3\") " Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.079439 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/323a81dc-ab24-4fff-846c-4a18bef330d3-logs\") pod \"323a81dc-ab24-4fff-846c-4a18bef330d3\" (UID: \"323a81dc-ab24-4fff-846c-4a18bef330d3\") " Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.079468 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323a81dc-ab24-4fff-846c-4a18bef330d3-combined-ca-bundle\") pod \"323a81dc-ab24-4fff-846c-4a18bef330d3\" (UID: \"323a81dc-ab24-4fff-846c-4a18bef330d3\") " Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.079501 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r57hf\" (UniqueName: \"kubernetes.io/projected/323a81dc-ab24-4fff-846c-4a18bef330d3-kube-api-access-r57hf\") pod \"323a81dc-ab24-4fff-846c-4a18bef330d3\" (UID: \"323a81dc-ab24-4fff-846c-4a18bef330d3\") " Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.080971 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/323a81dc-ab24-4fff-846c-4a18bef330d3-logs" (OuterVolumeSpecName: "logs") pod "323a81dc-ab24-4fff-846c-4a18bef330d3" (UID: "323a81dc-ab24-4fff-846c-4a18bef330d3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.085278 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/323a81dc-ab24-4fff-846c-4a18bef330d3-kube-api-access-r57hf" (OuterVolumeSpecName: "kube-api-access-r57hf") pod "323a81dc-ab24-4fff-846c-4a18bef330d3" (UID: "323a81dc-ab24-4fff-846c-4a18bef330d3"). InnerVolumeSpecName "kube-api-access-r57hf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.104864 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/323a81dc-ab24-4fff-846c-4a18bef330d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "323a81dc-ab24-4fff-846c-4a18bef330d3" (UID: "323a81dc-ab24-4fff-846c-4a18bef330d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.106729 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/323a81dc-ab24-4fff-846c-4a18bef330d3-config-data" (OuterVolumeSpecName: "config-data") pod "323a81dc-ab24-4fff-846c-4a18bef330d3" (UID: "323a81dc-ab24-4fff-846c-4a18bef330d3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.181004 5024 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/323a81dc-ab24-4fff-846c-4a18bef330d3-logs\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.181051 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323a81dc-ab24-4fff-846c-4a18bef330d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.181078 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r57hf\" (UniqueName: \"kubernetes.io/projected/323a81dc-ab24-4fff-846c-4a18bef330d3-kube-api-access-r57hf\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.181095 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/323a81dc-ab24-4fff-846c-4a18bef330d3-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.425830 5024 generic.go:334] "Generic (PLEG): container finished" podID="323a81dc-ab24-4fff-846c-4a18bef330d3" containerID="13bfb6fa0be7d29d97cec893e1f3cf37f2568817d590392e3fe73e338c32f05e" exitCode=0 Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.425879 5024 generic.go:334] "Generic (PLEG): container finished" podID="323a81dc-ab24-4fff-846c-4a18bef330d3" containerID="8ff44bf7cc3733e17b7db7d69138629073a583383cd32d03a22b480cfddb7579" exitCode=143 Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.427180 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.432288 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"323a81dc-ab24-4fff-846c-4a18bef330d3","Type":"ContainerDied","Data":"13bfb6fa0be7d29d97cec893e1f3cf37f2568817d590392e3fe73e338c32f05e"} Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.432409 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"323a81dc-ab24-4fff-846c-4a18bef330d3","Type":"ContainerDied","Data":"8ff44bf7cc3733e17b7db7d69138629073a583383cd32d03a22b480cfddb7579"} Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.432437 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"323a81dc-ab24-4fff-846c-4a18bef330d3","Type":"ContainerDied","Data":"4b7e683abdb78faba3bde2a20e2726338a5577f46ff9536dca62ef9583a4cead"} Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.432465 5024 scope.go:117] "RemoveContainer" containerID="13bfb6fa0be7d29d97cec893e1f3cf37f2568817d590392e3fe73e338c32f05e" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.471185 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.476087 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.500911 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 07 12:48:09 crc kubenswrapper[5024]: E1007 12:48:09.501501 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="323a81dc-ab24-4fff-846c-4a18bef330d3" containerName="nova-metadata-metadata" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.501523 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="323a81dc-ab24-4fff-846c-4a18bef330d3" containerName="nova-metadata-metadata" Oct 
07 12:48:09 crc kubenswrapper[5024]: E1007 12:48:09.501550 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="323a81dc-ab24-4fff-846c-4a18bef330d3" containerName="nova-metadata-log" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.501557 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="323a81dc-ab24-4fff-846c-4a18bef330d3" containerName="nova-metadata-log" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.501784 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="323a81dc-ab24-4fff-846c-4a18bef330d3" containerName="nova-metadata-log" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.501805 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="323a81dc-ab24-4fff-846c-4a18bef330d3" containerName="nova-metadata-metadata" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.502986 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.508585 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.508997 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.510458 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.522837 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.566696 5024 scope.go:117] "RemoveContainer" containerID="8ff44bf7cc3733e17b7db7d69138629073a583383cd32d03a22b480cfddb7579" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.590188 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81bf5937-61dd-49d6-b99f-8cc34a5939b5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"81bf5937-61dd-49d6-b99f-8cc34a5939b5\") " pod="openstack/nova-metadata-0" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.590232 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81bf5937-61dd-49d6-b99f-8cc34a5939b5-logs\") pod \"nova-metadata-0\" (UID: \"81bf5937-61dd-49d6-b99f-8cc34a5939b5\") " pod="openstack/nova-metadata-0" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.590255 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/81bf5937-61dd-49d6-b99f-8cc34a5939b5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"81bf5937-61dd-49d6-b99f-8cc34a5939b5\") " pod="openstack/nova-metadata-0" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.590502 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81bf5937-61dd-49d6-b99f-8cc34a5939b5-config-data\") pod \"nova-metadata-0\" (UID: \"81bf5937-61dd-49d6-b99f-8cc34a5939b5\") " pod="openstack/nova-metadata-0" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.590564 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhwgp\" (UniqueName: \"kubernetes.io/projected/81bf5937-61dd-49d6-b99f-8cc34a5939b5-kube-api-access-qhwgp\") pod \"nova-metadata-0\" (UID: \"81bf5937-61dd-49d6-b99f-8cc34a5939b5\") " pod="openstack/nova-metadata-0" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.591246 5024 scope.go:117] "RemoveContainer" containerID="13bfb6fa0be7d29d97cec893e1f3cf37f2568817d590392e3fe73e338c32f05e" Oct 07 12:48:09 crc kubenswrapper[5024]: E1007 12:48:09.591660 5024 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13bfb6fa0be7d29d97cec893e1f3cf37f2568817d590392e3fe73e338c32f05e\": container with ID starting with 13bfb6fa0be7d29d97cec893e1f3cf37f2568817d590392e3fe73e338c32f05e not found: ID does not exist" containerID="13bfb6fa0be7d29d97cec893e1f3cf37f2568817d590392e3fe73e338c32f05e" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.591695 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13bfb6fa0be7d29d97cec893e1f3cf37f2568817d590392e3fe73e338c32f05e"} err="failed to get container status \"13bfb6fa0be7d29d97cec893e1f3cf37f2568817d590392e3fe73e338c32f05e\": rpc error: code = NotFound desc = could not find container \"13bfb6fa0be7d29d97cec893e1f3cf37f2568817d590392e3fe73e338c32f05e\": container with ID starting with 13bfb6fa0be7d29d97cec893e1f3cf37f2568817d590392e3fe73e338c32f05e not found: ID does not exist" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.591719 5024 scope.go:117] "RemoveContainer" containerID="8ff44bf7cc3733e17b7db7d69138629073a583383cd32d03a22b480cfddb7579" Oct 07 12:48:09 crc kubenswrapper[5024]: E1007 12:48:09.592013 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ff44bf7cc3733e17b7db7d69138629073a583383cd32d03a22b480cfddb7579\": container with ID starting with 8ff44bf7cc3733e17b7db7d69138629073a583383cd32d03a22b480cfddb7579 not found: ID does not exist" containerID="8ff44bf7cc3733e17b7db7d69138629073a583383cd32d03a22b480cfddb7579" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.592052 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ff44bf7cc3733e17b7db7d69138629073a583383cd32d03a22b480cfddb7579"} err="failed to get container status \"8ff44bf7cc3733e17b7db7d69138629073a583383cd32d03a22b480cfddb7579\": rpc error: code = NotFound 
desc = could not find container \"8ff44bf7cc3733e17b7db7d69138629073a583383cd32d03a22b480cfddb7579\": container with ID starting with 8ff44bf7cc3733e17b7db7d69138629073a583383cd32d03a22b480cfddb7579 not found: ID does not exist" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.592090 5024 scope.go:117] "RemoveContainer" containerID="13bfb6fa0be7d29d97cec893e1f3cf37f2568817d590392e3fe73e338c32f05e" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.592490 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13bfb6fa0be7d29d97cec893e1f3cf37f2568817d590392e3fe73e338c32f05e"} err="failed to get container status \"13bfb6fa0be7d29d97cec893e1f3cf37f2568817d590392e3fe73e338c32f05e\": rpc error: code = NotFound desc = could not find container \"13bfb6fa0be7d29d97cec893e1f3cf37f2568817d590392e3fe73e338c32f05e\": container with ID starting with 13bfb6fa0be7d29d97cec893e1f3cf37f2568817d590392e3fe73e338c32f05e not found: ID does not exist" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.592548 5024 scope.go:117] "RemoveContainer" containerID="8ff44bf7cc3733e17b7db7d69138629073a583383cd32d03a22b480cfddb7579" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.593393 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ff44bf7cc3733e17b7db7d69138629073a583383cd32d03a22b480cfddb7579"} err="failed to get container status \"8ff44bf7cc3733e17b7db7d69138629073a583383cd32d03a22b480cfddb7579\": rpc error: code = NotFound desc = could not find container \"8ff44bf7cc3733e17b7db7d69138629073a583383cd32d03a22b480cfddb7579\": container with ID starting with 8ff44bf7cc3733e17b7db7d69138629073a583383cd32d03a22b480cfddb7579 not found: ID does not exist" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.691882 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/81bf5937-61dd-49d6-b99f-8cc34a5939b5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"81bf5937-61dd-49d6-b99f-8cc34a5939b5\") " pod="openstack/nova-metadata-0" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.692278 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81bf5937-61dd-49d6-b99f-8cc34a5939b5-config-data\") pod \"nova-metadata-0\" (UID: \"81bf5937-61dd-49d6-b99f-8cc34a5939b5\") " pod="openstack/nova-metadata-0" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.692310 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhwgp\" (UniqueName: \"kubernetes.io/projected/81bf5937-61dd-49d6-b99f-8cc34a5939b5-kube-api-access-qhwgp\") pod \"nova-metadata-0\" (UID: \"81bf5937-61dd-49d6-b99f-8cc34a5939b5\") " pod="openstack/nova-metadata-0" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.692396 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81bf5937-61dd-49d6-b99f-8cc34a5939b5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"81bf5937-61dd-49d6-b99f-8cc34a5939b5\") " pod="openstack/nova-metadata-0" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.692413 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81bf5937-61dd-49d6-b99f-8cc34a5939b5-logs\") pod \"nova-metadata-0\" (UID: \"81bf5937-61dd-49d6-b99f-8cc34a5939b5\") " pod="openstack/nova-metadata-0" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.692860 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81bf5937-61dd-49d6-b99f-8cc34a5939b5-logs\") pod \"nova-metadata-0\" (UID: \"81bf5937-61dd-49d6-b99f-8cc34a5939b5\") " pod="openstack/nova-metadata-0" Oct 07 12:48:09 crc 
kubenswrapper[5024]: I1007 12:48:09.695775 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/81bf5937-61dd-49d6-b99f-8cc34a5939b5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"81bf5937-61dd-49d6-b99f-8cc34a5939b5\") " pod="openstack/nova-metadata-0" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.696796 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81bf5937-61dd-49d6-b99f-8cc34a5939b5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"81bf5937-61dd-49d6-b99f-8cc34a5939b5\") " pod="openstack/nova-metadata-0" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.707438 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhwgp\" (UniqueName: \"kubernetes.io/projected/81bf5937-61dd-49d6-b99f-8cc34a5939b5-kube-api-access-qhwgp\") pod \"nova-metadata-0\" (UID: \"81bf5937-61dd-49d6-b99f-8cc34a5939b5\") " pod="openstack/nova-metadata-0" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.708455 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81bf5937-61dd-49d6-b99f-8cc34a5939b5-config-data\") pod \"nova-metadata-0\" (UID: \"81bf5937-61dd-49d6-b99f-8cc34a5939b5\") " pod="openstack/nova-metadata-0" Oct 07 12:48:09 crc kubenswrapper[5024]: I1007 12:48:09.822411 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 07 12:48:10 crc kubenswrapper[5024]: I1007 12:48:10.365794 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 07 12:48:10 crc kubenswrapper[5024]: I1007 12:48:10.439377 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81bf5937-61dd-49d6-b99f-8cc34a5939b5","Type":"ContainerStarted","Data":"ffb7dab466b00c071944be5fac196c4a0c1af6d9712c240b6f4ec5e55e562392"} Oct 07 12:48:10 crc kubenswrapper[5024]: I1007 12:48:10.762048 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="323a81dc-ab24-4fff-846c-4a18bef330d3" path="/var/lib/kubelet/pods/323a81dc-ab24-4fff-846c-4a18bef330d3/volumes" Oct 07 12:48:11 crc kubenswrapper[5024]: I1007 12:48:11.449734 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81bf5937-61dd-49d6-b99f-8cc34a5939b5","Type":"ContainerStarted","Data":"057a68ce4aa4280d55cec9a0c7f8b0bd553774040c4981793d8167202db6fb0e"} Oct 07 12:48:11 crc kubenswrapper[5024]: I1007 12:48:11.449774 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81bf5937-61dd-49d6-b99f-8cc34a5939b5","Type":"ContainerStarted","Data":"c8d7cb3be33726418188f0cced5d15f279a3c4b749da780a2e42ed37959c55a3"} Oct 07 12:48:11 crc kubenswrapper[5024]: I1007 12:48:11.469961 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.469941912 podStartE2EDuration="2.469941912s" podCreationTimestamp="2025-10-07 12:48:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:48:11.466265324 +0000 UTC m=+1229.542052172" watchObservedRunningTime="2025-10-07 12:48:11.469941912 +0000 UTC m=+1229.545728750" Oct 07 12:48:12 crc kubenswrapper[5024]: I1007 12:48:12.460970 5024 
generic.go:334] "Generic (PLEG): container finished" podID="17f46fbe-963b-4d59-b9a8-3d02e31157a3" containerID="5559e9e74501b9091eee3ee4e0089c64fe74a7fb253cb68b4fa85a7f0dbce80a" exitCode=0 Oct 07 12:48:12 crc kubenswrapper[5024]: I1007 12:48:12.462316 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bd6lw" event={"ID":"17f46fbe-963b-4d59-b9a8-3d02e31157a3","Type":"ContainerDied","Data":"5559e9e74501b9091eee3ee4e0089c64fe74a7fb253cb68b4fa85a7f0dbce80a"} Oct 07 12:48:12 crc kubenswrapper[5024]: I1007 12:48:12.480666 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 07 12:48:12 crc kubenswrapper[5024]: I1007 12:48:12.480944 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="f381de8f-0818-456c-9acb-7ee939a6da12" containerName="kube-state-metrics" containerID="cri-o://aedc2182176b1f3371afcd98a133f480ec283228f12e38c29466cbea78c1adc4" gracePeriod=30 Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.036691 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.149994 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bttg5\" (UniqueName: \"kubernetes.io/projected/f381de8f-0818-456c-9acb-7ee939a6da12-kube-api-access-bttg5\") pod \"f381de8f-0818-456c-9acb-7ee939a6da12\" (UID: \"f381de8f-0818-456c-9acb-7ee939a6da12\") " Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.154816 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f381de8f-0818-456c-9acb-7ee939a6da12-kube-api-access-bttg5" (OuterVolumeSpecName: "kube-api-access-bttg5") pod "f381de8f-0818-456c-9acb-7ee939a6da12" (UID: "f381de8f-0818-456c-9acb-7ee939a6da12"). InnerVolumeSpecName "kube-api-access-bttg5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.252444 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bttg5\" (UniqueName: \"kubernetes.io/projected/f381de8f-0818-456c-9acb-7ee939a6da12-kube-api-access-bttg5\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.470488 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.470808 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66a6990d-f48d-401c-953b-b09b8a654b10" containerName="ceilometer-central-agent" containerID="cri-o://6cbfe7a575786b01693dff43ba9d22ccab005c0adb7d202c7dd5ddd331b2217e" gracePeriod=30 Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.470857 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66a6990d-f48d-401c-953b-b09b8a654b10" containerName="proxy-httpd" containerID="cri-o://055b2675d510ed984215458ba9a34bdcb701fd3d2b5b6e199169ed4ced68bd39" gracePeriod=30 Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.470884 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66a6990d-f48d-401c-953b-b09b8a654b10" containerName="ceilometer-notification-agent" containerID="cri-o://3ba76ed76296e33904aad24b337568a9f178fd640c4586461cb934f4661b306c" gracePeriod=30 Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.470896 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66a6990d-f48d-401c-953b-b09b8a654b10" containerName="sg-core" containerID="cri-o://8427f622c9f8076465aaeb85e6ceda15375c0c4f7a75799f3653136d09be222d" gracePeriod=30 Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.474343 5024 generic.go:334] "Generic (PLEG): container 
finished" podID="f381de8f-0818-456c-9acb-7ee939a6da12" containerID="aedc2182176b1f3371afcd98a133f480ec283228f12e38c29466cbea78c1adc4" exitCode=2 Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.474543 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.474992 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f381de8f-0818-456c-9acb-7ee939a6da12","Type":"ContainerDied","Data":"aedc2182176b1f3371afcd98a133f480ec283228f12e38c29466cbea78c1adc4"} Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.475056 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f381de8f-0818-456c-9acb-7ee939a6da12","Type":"ContainerDied","Data":"43e00eb93dd077c7c1f557de0c81413bc74cde2f7e6abc216dd6e9703534a952"} Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.475072 5024 scope.go:117] "RemoveContainer" containerID="aedc2182176b1f3371afcd98a133f480ec283228f12e38c29466cbea78c1adc4" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.512818 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.528522 5024 scope.go:117] "RemoveContainer" containerID="aedc2182176b1f3371afcd98a133f480ec283228f12e38c29466cbea78c1adc4" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.529706 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 07 12:48:13 crc kubenswrapper[5024]: E1007 12:48:13.531626 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aedc2182176b1f3371afcd98a133f480ec283228f12e38c29466cbea78c1adc4\": container with ID starting with aedc2182176b1f3371afcd98a133f480ec283228f12e38c29466cbea78c1adc4 not found: ID does not exist" 
containerID="aedc2182176b1f3371afcd98a133f480ec283228f12e38c29466cbea78c1adc4" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.531670 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aedc2182176b1f3371afcd98a133f480ec283228f12e38c29466cbea78c1adc4"} err="failed to get container status \"aedc2182176b1f3371afcd98a133f480ec283228f12e38c29466cbea78c1adc4\": rpc error: code = NotFound desc = could not find container \"aedc2182176b1f3371afcd98a133f480ec283228f12e38c29466cbea78c1adc4\": container with ID starting with aedc2182176b1f3371afcd98a133f480ec283228f12e38c29466cbea78c1adc4 not found: ID does not exist" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.540986 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 07 12:48:13 crc kubenswrapper[5024]: E1007 12:48:13.541702 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f381de8f-0818-456c-9acb-7ee939a6da12" containerName="kube-state-metrics" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.541728 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f381de8f-0818-456c-9acb-7ee939a6da12" containerName="kube-state-metrics" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.541948 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="f381de8f-0818-456c-9acb-7ee939a6da12" containerName="kube-state-metrics" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.542810 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.545904 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.548548 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.581721 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.582279 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.582382 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.664460 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/91789504-a808-4dab-99d4-a3ad6eb1751f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"91789504-a808-4dab-99d4-a3ad6eb1751f\") " pod="openstack/kube-state-metrics-0" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.664617 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlzlj\" (UniqueName: \"kubernetes.io/projected/91789504-a808-4dab-99d4-a3ad6eb1751f-kube-api-access-xlzlj\") pod \"kube-state-metrics-0\" (UID: \"91789504-a808-4dab-99d4-a3ad6eb1751f\") " pod="openstack/kube-state-metrics-0" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.664730 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91789504-a808-4dab-99d4-a3ad6eb1751f-combined-ca-bundle\") 
pod \"kube-state-metrics-0\" (UID: \"91789504-a808-4dab-99d4-a3ad6eb1751f\") " pod="openstack/kube-state-metrics-0" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.664840 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/91789504-a808-4dab-99d4-a3ad6eb1751f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"91789504-a808-4dab-99d4-a3ad6eb1751f\") " pod="openstack/kube-state-metrics-0" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.720777 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.720854 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.720923 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.721987 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fd405720319248df31cb182cbf68d7e11b73aa6427c42acbbb531905f6746cbe"} pod="openshift-machine-config-operator/machine-config-daemon-t95cr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.722063 5024 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" containerID="cri-o://fd405720319248df31cb182cbf68d7e11b73aa6427c42acbbb531905f6746cbe" gracePeriod=600 Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.752702 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.767498 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91789504-a808-4dab-99d4-a3ad6eb1751f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"91789504-a808-4dab-99d4-a3ad6eb1751f\") " pod="openstack/kube-state-metrics-0" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.767815 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/91789504-a808-4dab-99d4-a3ad6eb1751f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"91789504-a808-4dab-99d4-a3ad6eb1751f\") " pod="openstack/kube-state-metrics-0" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.767969 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/91789504-a808-4dab-99d4-a3ad6eb1751f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"91789504-a808-4dab-99d4-a3ad6eb1751f\") " pod="openstack/kube-state-metrics-0" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.768081 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlzlj\" (UniqueName: \"kubernetes.io/projected/91789504-a808-4dab-99d4-a3ad6eb1751f-kube-api-access-xlzlj\") pod \"kube-state-metrics-0\" (UID: 
\"91789504-a808-4dab-99d4-a3ad6eb1751f\") " pod="openstack/kube-state-metrics-0" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.776833 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/91789504-a808-4dab-99d4-a3ad6eb1751f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"91789504-a808-4dab-99d4-a3ad6eb1751f\") " pod="openstack/kube-state-metrics-0" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.783602 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/91789504-a808-4dab-99d4-a3ad6eb1751f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"91789504-a808-4dab-99d4-a3ad6eb1751f\") " pod="openstack/kube-state-metrics-0" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.807198 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91789504-a808-4dab-99d4-a3ad6eb1751f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"91789504-a808-4dab-99d4-a3ad6eb1751f\") " pod="openstack/kube-state-metrics-0" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.808462 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlzlj\" (UniqueName: \"kubernetes.io/projected/91789504-a808-4dab-99d4-a3ad6eb1751f-kube-api-access-xlzlj\") pod \"kube-state-metrics-0\" (UID: \"91789504-a808-4dab-99d4-a3ad6eb1751f\") " pod="openstack/kube-state-metrics-0" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.824240 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.865730 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.953772 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-566b5b7845-j5fsn" Oct 07 12:48:13 crc kubenswrapper[5024]: I1007 12:48:13.995199 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bd6lw" Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.039508 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-s98w8"] Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.039771 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" podUID="0ebb6498-3228-4baa-984b-d76104094326" containerName="dnsmasq-dns" containerID="cri-o://09a96a2de283cd00b5d157973c2eb63d7343c811a7e5236da83eb4fe120bca13" gracePeriod=10 Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.071851 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f46fbe-963b-4d59-b9a8-3d02e31157a3-combined-ca-bundle\") pod \"17f46fbe-963b-4d59-b9a8-3d02e31157a3\" (UID: \"17f46fbe-963b-4d59-b9a8-3d02e31157a3\") " Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.071935 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f46fbe-963b-4d59-b9a8-3d02e31157a3-config-data\") pod \"17f46fbe-963b-4d59-b9a8-3d02e31157a3\" (UID: \"17f46fbe-963b-4d59-b9a8-3d02e31157a3\") " Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.072963 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17f46fbe-963b-4d59-b9a8-3d02e31157a3-scripts\") pod \"17f46fbe-963b-4d59-b9a8-3d02e31157a3\" (UID: \"17f46fbe-963b-4d59-b9a8-3d02e31157a3\") " 
Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.073425 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvx4p\" (UniqueName: \"kubernetes.io/projected/17f46fbe-963b-4d59-b9a8-3d02e31157a3-kube-api-access-bvx4p\") pod \"17f46fbe-963b-4d59-b9a8-3d02e31157a3\" (UID: \"17f46fbe-963b-4d59-b9a8-3d02e31157a3\") " Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.087388 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17f46fbe-963b-4d59-b9a8-3d02e31157a3-kube-api-access-bvx4p" (OuterVolumeSpecName: "kube-api-access-bvx4p") pod "17f46fbe-963b-4d59-b9a8-3d02e31157a3" (UID: "17f46fbe-963b-4d59-b9a8-3d02e31157a3"). InnerVolumeSpecName "kube-api-access-bvx4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.087509 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f46fbe-963b-4d59-b9a8-3d02e31157a3-scripts" (OuterVolumeSpecName: "scripts") pod "17f46fbe-963b-4d59-b9a8-3d02e31157a3" (UID: "17f46fbe-963b-4d59-b9a8-3d02e31157a3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.152034 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f46fbe-963b-4d59-b9a8-3d02e31157a3-config-data" (OuterVolumeSpecName: "config-data") pod "17f46fbe-963b-4d59-b9a8-3d02e31157a3" (UID: "17f46fbe-963b-4d59-b9a8-3d02e31157a3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.174850 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f46fbe-963b-4d59-b9a8-3d02e31157a3-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.174920 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17f46fbe-963b-4d59-b9a8-3d02e31157a3-scripts\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.174934 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvx4p\" (UniqueName: \"kubernetes.io/projected/17f46fbe-963b-4d59-b9a8-3d02e31157a3-kube-api-access-bvx4p\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.175279 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f46fbe-963b-4d59-b9a8-3d02e31157a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17f46fbe-963b-4d59-b9a8-3d02e31157a3" (UID: "17f46fbe-963b-4d59-b9a8-3d02e31157a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.277009 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f46fbe-963b-4d59-b9a8-3d02e31157a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.420044 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.500733 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bd6lw" Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.500741 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bd6lw" event={"ID":"17f46fbe-963b-4d59-b9a8-3d02e31157a3","Type":"ContainerDied","Data":"c1da9b1a74685ce60c822fce7215110839ecd44171f6dcc1aa582f09aa6f6ba3"} Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.500829 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1da9b1a74685ce60c822fce7215110839ecd44171f6dcc1aa582f09aa6f6ba3" Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.506363 5024 generic.go:334] "Generic (PLEG): container finished" podID="ca249de4-0737-4c42-b2e9-78a0abf2bf94" containerID="7e6fcbe2297b522c45d781a8673b5cc18a2f604eb779dc6aa11fc2debb78d89b" exitCode=0 Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.506443 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bm8ht" event={"ID":"ca249de4-0737-4c42-b2e9-78a0abf2bf94","Type":"ContainerDied","Data":"7e6fcbe2297b522c45d781a8673b5cc18a2f604eb779dc6aa11fc2debb78d89b"} Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.519575 5024 generic.go:334] "Generic (PLEG): container finished" podID="66a6990d-f48d-401c-953b-b09b8a654b10" containerID="055b2675d510ed984215458ba9a34bdcb701fd3d2b5b6e199169ed4ced68bd39" exitCode=0 Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.519610 5024 generic.go:334] "Generic (PLEG): container finished" podID="66a6990d-f48d-401c-953b-b09b8a654b10" containerID="8427f622c9f8076465aaeb85e6ceda15375c0c4f7a75799f3653136d09be222d" exitCode=2 Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.519620 5024 generic.go:334] "Generic (PLEG): container finished" podID="66a6990d-f48d-401c-953b-b09b8a654b10" containerID="6cbfe7a575786b01693dff43ba9d22ccab005c0adb7d202c7dd5ddd331b2217e" exitCode=0 Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 
12:48:14.519671 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66a6990d-f48d-401c-953b-b09b8a654b10","Type":"ContainerDied","Data":"055b2675d510ed984215458ba9a34bdcb701fd3d2b5b6e199169ed4ced68bd39"} Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.519698 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66a6990d-f48d-401c-953b-b09b8a654b10","Type":"ContainerDied","Data":"8427f622c9f8076465aaeb85e6ceda15375c0c4f7a75799f3653136d09be222d"} Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.519712 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66a6990d-f48d-401c-953b-b09b8a654b10","Type":"ContainerDied","Data":"6cbfe7a575786b01693dff43ba9d22ccab005c0adb7d202c7dd5ddd331b2217e"} Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.529219 5024 generic.go:334] "Generic (PLEG): container finished" podID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerID="fd405720319248df31cb182cbf68d7e11b73aa6427c42acbbb531905f6746cbe" exitCode=0 Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.529291 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerDied","Data":"fd405720319248df31cb182cbf68d7e11b73aa6427c42acbbb531905f6746cbe"} Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.529324 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerStarted","Data":"07ea9542ad73f2a8df6c66c3f061d1b2b70707d49cc55df226190bf69e9c5f54"} Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.529342 5024 scope.go:117] "RemoveContainer" containerID="b0e095ff552b6ff8a1e3e80992a870a1ae734d6958dd93421411f9fc1d15e1a0" Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 
12:48:14.535244 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"91789504-a808-4dab-99d4-a3ad6eb1751f","Type":"ContainerStarted","Data":"58928f4499c3ce58724b961df56c578ff6db6e2eed0ab7c45ce02409942a2c07"} Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.543859 5024 generic.go:334] "Generic (PLEG): container finished" podID="0ebb6498-3228-4baa-984b-d76104094326" containerID="09a96a2de283cd00b5d157973c2eb63d7343c811a7e5236da83eb4fe120bca13" exitCode=0 Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.544201 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" event={"ID":"0ebb6498-3228-4baa-984b-d76104094326","Type":"ContainerDied","Data":"09a96a2de283cd00b5d157973c2eb63d7343c811a7e5236da83eb4fe120bca13"} Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.588326 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.610101 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.670358 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8fb081f8-d831-4362-8169-d4b183854adc" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.170:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.670679 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8fb081f8-d831-4362-8169-d4b183854adc" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.170:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.797775 5024 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f381de8f-0818-456c-9acb-7ee939a6da12" path="/var/lib/kubelet/pods/f381de8f-0818-456c-9acb-7ee939a6da12/volumes" Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.804902 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ebb6498-3228-4baa-984b-d76104094326-ovsdbserver-sb\") pod \"0ebb6498-3228-4baa-984b-d76104094326\" (UID: \"0ebb6498-3228-4baa-984b-d76104094326\") " Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.806186 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ebb6498-3228-4baa-984b-d76104094326-ovsdbserver-nb\") pod \"0ebb6498-3228-4baa-984b-d76104094326\" (UID: \"0ebb6498-3228-4baa-984b-d76104094326\") " Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.806216 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ebb6498-3228-4baa-984b-d76104094326-config\") pod \"0ebb6498-3228-4baa-984b-d76104094326\" (UID: \"0ebb6498-3228-4baa-984b-d76104094326\") " Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.806273 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ebb6498-3228-4baa-984b-d76104094326-dns-svc\") pod \"0ebb6498-3228-4baa-984b-d76104094326\" (UID: \"0ebb6498-3228-4baa-984b-d76104094326\") " Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.806296 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckc7d\" (UniqueName: \"kubernetes.io/projected/0ebb6498-3228-4baa-984b-d76104094326-kube-api-access-ckc7d\") pod \"0ebb6498-3228-4baa-984b-d76104094326\" (UID: \"0ebb6498-3228-4baa-984b-d76104094326\") " Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.814473 
5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.814511 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.814709 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="81bf5937-61dd-49d6-b99f-8cc34a5939b5" containerName="nova-metadata-log" containerID="cri-o://c8d7cb3be33726418188f0cced5d15f279a3c4b749da780a2e42ed37959c55a3" gracePeriod=30 Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.815082 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="81bf5937-61dd-49d6-b99f-8cc34a5939b5" containerName="nova-metadata-metadata" containerID="cri-o://057a68ce4aa4280d55cec9a0c7f8b0bd553774040c4981793d8167202db6fb0e" gracePeriod=30 Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.827289 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.827471 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.856603 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ebb6498-3228-4baa-984b-d76104094326-kube-api-access-ckc7d" (OuterVolumeSpecName: "kube-api-access-ckc7d") pod "0ebb6498-3228-4baa-984b-d76104094326" (UID: "0ebb6498-3228-4baa-984b-d76104094326"). InnerVolumeSpecName "kube-api-access-ckc7d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.907877 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckc7d\" (UniqueName: \"kubernetes.io/projected/0ebb6498-3228-4baa-984b-d76104094326-kube-api-access-ckc7d\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:14 crc kubenswrapper[5024]: I1007 12:48:14.990241 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ebb6498-3228-4baa-984b-d76104094326-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0ebb6498-3228-4baa-984b-d76104094326" (UID: "0ebb6498-3228-4baa-984b-d76104094326"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:48:15 crc kubenswrapper[5024]: I1007 12:48:15.007816 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ebb6498-3228-4baa-984b-d76104094326-config" (OuterVolumeSpecName: "config") pod "0ebb6498-3228-4baa-984b-d76104094326" (UID: "0ebb6498-3228-4baa-984b-d76104094326"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:48:15 crc kubenswrapper[5024]: I1007 12:48:15.009410 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ebb6498-3228-4baa-984b-d76104094326-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:15 crc kubenswrapper[5024]: I1007 12:48:15.009485 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ebb6498-3228-4baa-984b-d76104094326-config\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:15 crc kubenswrapper[5024]: I1007 12:48:15.061307 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ebb6498-3228-4baa-984b-d76104094326-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0ebb6498-3228-4baa-984b-d76104094326" (UID: "0ebb6498-3228-4baa-984b-d76104094326"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:48:15 crc kubenswrapper[5024]: I1007 12:48:15.068964 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ebb6498-3228-4baa-984b-d76104094326-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0ebb6498-3228-4baa-984b-d76104094326" (UID: "0ebb6498-3228-4baa-984b-d76104094326"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:48:15 crc kubenswrapper[5024]: I1007 12:48:15.110833 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ebb6498-3228-4baa-984b-d76104094326-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:15 crc kubenswrapper[5024]: I1007 12:48:15.110866 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ebb6498-3228-4baa-984b-d76104094326-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:15 crc kubenswrapper[5024]: I1007 12:48:15.446958 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 07 12:48:15 crc kubenswrapper[5024]: I1007 12:48:15.561836 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"91789504-a808-4dab-99d4-a3ad6eb1751f","Type":"ContainerStarted","Data":"01968b759dc5a862a6b14a7fbae8c9a368b6214b3c5c0484c48796eacd5657ea"} Oct 07 12:48:15 crc kubenswrapper[5024]: I1007 12:48:15.562236 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 07 12:48:15 crc kubenswrapper[5024]: I1007 12:48:15.564428 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" Oct 07 12:48:15 crc kubenswrapper[5024]: I1007 12:48:15.564441 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" event={"ID":"0ebb6498-3228-4baa-984b-d76104094326","Type":"ContainerDied","Data":"cb02a4ec54ed84447e1d2c059e0594c67766da328db2427c6be94ce1206257b1"} Oct 07 12:48:15 crc kubenswrapper[5024]: I1007 12:48:15.564511 5024 scope.go:117] "RemoveContainer" containerID="09a96a2de283cd00b5d157973c2eb63d7343c811a7e5236da83eb4fe120bca13" Oct 07 12:48:15 crc kubenswrapper[5024]: I1007 12:48:15.574433 5024 generic.go:334] "Generic (PLEG): container finished" podID="81bf5937-61dd-49d6-b99f-8cc34a5939b5" containerID="057a68ce4aa4280d55cec9a0c7f8b0bd553774040c4981793d8167202db6fb0e" exitCode=0 Oct 07 12:48:15 crc kubenswrapper[5024]: I1007 12:48:15.574487 5024 generic.go:334] "Generic (PLEG): container finished" podID="81bf5937-61dd-49d6-b99f-8cc34a5939b5" containerID="c8d7cb3be33726418188f0cced5d15f279a3c4b749da780a2e42ed37959c55a3" exitCode=143 Oct 07 12:48:15 crc kubenswrapper[5024]: I1007 12:48:15.574684 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81bf5937-61dd-49d6-b99f-8cc34a5939b5","Type":"ContainerDied","Data":"057a68ce4aa4280d55cec9a0c7f8b0bd553774040c4981793d8167202db6fb0e"} Oct 07 12:48:15 crc kubenswrapper[5024]: I1007 12:48:15.574744 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81bf5937-61dd-49d6-b99f-8cc34a5939b5","Type":"ContainerDied","Data":"c8d7cb3be33726418188f0cced5d15f279a3c4b749da780a2e42ed37959c55a3"} Oct 07 12:48:15 crc kubenswrapper[5024]: I1007 12:48:15.574981 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8fb081f8-d831-4362-8169-d4b183854adc" containerName="nova-api-log" containerID="cri-o://dd55042b83ed36c4af6d7201255cbd764156a213a3e449f2ebaa3308297d4ff5" 
gracePeriod=30 Oct 07 12:48:15 crc kubenswrapper[5024]: I1007 12:48:15.575231 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8fb081f8-d831-4362-8169-d4b183854adc" containerName="nova-api-api" containerID="cri-o://56e241734d99c14216496e5b7c4ed071b87cd2b77203d6d25df03a61dc4090de" gracePeriod=30 Oct 07 12:48:15 crc kubenswrapper[5024]: I1007 12:48:15.588264 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.042722449 podStartE2EDuration="2.588241331s" podCreationTimestamp="2025-10-07 12:48:13 +0000 UTC" firstStartedPulling="2025-10-07 12:48:14.464176358 +0000 UTC m=+1232.539963196" lastFinishedPulling="2025-10-07 12:48:15.00969524 +0000 UTC m=+1233.085482078" observedRunningTime="2025-10-07 12:48:15.57491611 +0000 UTC m=+1233.650702998" watchObservedRunningTime="2025-10-07 12:48:15.588241331 +0000 UTC m=+1233.664028169" Oct 07 12:48:15 crc kubenswrapper[5024]: I1007 12:48:15.614226 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-s98w8"] Oct 07 12:48:15 crc kubenswrapper[5024]: I1007 12:48:15.630215 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-s98w8"] Oct 07 12:48:15 crc kubenswrapper[5024]: I1007 12:48:15.646329 5024 scope.go:117] "RemoveContainer" containerID="fd9c7ed4f5a0bb958ca84e1d28678db7b4f7f9c0bbd1d385d7a561980c2b1158" Oct 07 12:48:15 crc kubenswrapper[5024]: I1007 12:48:15.960775 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.013591 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bm8ht" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.144461 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhwgp\" (UniqueName: \"kubernetes.io/projected/81bf5937-61dd-49d6-b99f-8cc34a5939b5-kube-api-access-qhwgp\") pod \"81bf5937-61dd-49d6-b99f-8cc34a5939b5\" (UID: \"81bf5937-61dd-49d6-b99f-8cc34a5939b5\") " Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.144542 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81bf5937-61dd-49d6-b99f-8cc34a5939b5-combined-ca-bundle\") pod \"81bf5937-61dd-49d6-b99f-8cc34a5939b5\" (UID: \"81bf5937-61dd-49d6-b99f-8cc34a5939b5\") " Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.144573 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca249de4-0737-4c42-b2e9-78a0abf2bf94-config-data\") pod \"ca249de4-0737-4c42-b2e9-78a0abf2bf94\" (UID: \"ca249de4-0737-4c42-b2e9-78a0abf2bf94\") " Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.144628 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81bf5937-61dd-49d6-b99f-8cc34a5939b5-logs\") pod \"81bf5937-61dd-49d6-b99f-8cc34a5939b5\" (UID: \"81bf5937-61dd-49d6-b99f-8cc34a5939b5\") " Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.144677 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca249de4-0737-4c42-b2e9-78a0abf2bf94-scripts\") pod \"ca249de4-0737-4c42-b2e9-78a0abf2bf94\" (UID: \"ca249de4-0737-4c42-b2e9-78a0abf2bf94\") " Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.144751 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/81bf5937-61dd-49d6-b99f-8cc34a5939b5-config-data\") pod \"81bf5937-61dd-49d6-b99f-8cc34a5939b5\" (UID: \"81bf5937-61dd-49d6-b99f-8cc34a5939b5\") " Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.144863 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca249de4-0737-4c42-b2e9-78a0abf2bf94-combined-ca-bundle\") pod \"ca249de4-0737-4c42-b2e9-78a0abf2bf94\" (UID: \"ca249de4-0737-4c42-b2e9-78a0abf2bf94\") " Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.144927 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr4vl\" (UniqueName: \"kubernetes.io/projected/ca249de4-0737-4c42-b2e9-78a0abf2bf94-kube-api-access-hr4vl\") pod \"ca249de4-0737-4c42-b2e9-78a0abf2bf94\" (UID: \"ca249de4-0737-4c42-b2e9-78a0abf2bf94\") " Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.144971 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/81bf5937-61dd-49d6-b99f-8cc34a5939b5-nova-metadata-tls-certs\") pod \"81bf5937-61dd-49d6-b99f-8cc34a5939b5\" (UID: \"81bf5937-61dd-49d6-b99f-8cc34a5939b5\") " Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.150414 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81bf5937-61dd-49d6-b99f-8cc34a5939b5-kube-api-access-qhwgp" (OuterVolumeSpecName: "kube-api-access-qhwgp") pod "81bf5937-61dd-49d6-b99f-8cc34a5939b5" (UID: "81bf5937-61dd-49d6-b99f-8cc34a5939b5"). InnerVolumeSpecName "kube-api-access-qhwgp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.152978 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca249de4-0737-4c42-b2e9-78a0abf2bf94-scripts" (OuterVolumeSpecName: "scripts") pod "ca249de4-0737-4c42-b2e9-78a0abf2bf94" (UID: "ca249de4-0737-4c42-b2e9-78a0abf2bf94"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.155729 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81bf5937-61dd-49d6-b99f-8cc34a5939b5-logs" (OuterVolumeSpecName: "logs") pod "81bf5937-61dd-49d6-b99f-8cc34a5939b5" (UID: "81bf5937-61dd-49d6-b99f-8cc34a5939b5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.156294 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca249de4-0737-4c42-b2e9-78a0abf2bf94-kube-api-access-hr4vl" (OuterVolumeSpecName: "kube-api-access-hr4vl") pod "ca249de4-0737-4c42-b2e9-78a0abf2bf94" (UID: "ca249de4-0737-4c42-b2e9-78a0abf2bf94"). InnerVolumeSpecName "kube-api-access-hr4vl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.182901 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81bf5937-61dd-49d6-b99f-8cc34a5939b5-config-data" (OuterVolumeSpecName: "config-data") pod "81bf5937-61dd-49d6-b99f-8cc34a5939b5" (UID: "81bf5937-61dd-49d6-b99f-8cc34a5939b5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.193719 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca249de4-0737-4c42-b2e9-78a0abf2bf94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca249de4-0737-4c42-b2e9-78a0abf2bf94" (UID: "ca249de4-0737-4c42-b2e9-78a0abf2bf94"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.200282 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81bf5937-61dd-49d6-b99f-8cc34a5939b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81bf5937-61dd-49d6-b99f-8cc34a5939b5" (UID: "81bf5937-61dd-49d6-b99f-8cc34a5939b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.207630 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca249de4-0737-4c42-b2e9-78a0abf2bf94-config-data" (OuterVolumeSpecName: "config-data") pod "ca249de4-0737-4c42-b2e9-78a0abf2bf94" (UID: "ca249de4-0737-4c42-b2e9-78a0abf2bf94"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.225800 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81bf5937-61dd-49d6-b99f-8cc34a5939b5-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "81bf5937-61dd-49d6-b99f-8cc34a5939b5" (UID: "81bf5937-61dd-49d6-b99f-8cc34a5939b5"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.247726 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca249de4-0737-4c42-b2e9-78a0abf2bf94-scripts\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.247820 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81bf5937-61dd-49d6-b99f-8cc34a5939b5-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.247833 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca249de4-0737-4c42-b2e9-78a0abf2bf94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.247848 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr4vl\" (UniqueName: \"kubernetes.io/projected/ca249de4-0737-4c42-b2e9-78a0abf2bf94-kube-api-access-hr4vl\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.247859 5024 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/81bf5937-61dd-49d6-b99f-8cc34a5939b5-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.247867 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhwgp\" (UniqueName: \"kubernetes.io/projected/81bf5937-61dd-49d6-b99f-8cc34a5939b5-kube-api-access-qhwgp\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.247875 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81bf5937-61dd-49d6-b99f-8cc34a5939b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:16 crc kubenswrapper[5024]: 
I1007 12:48:16.247883 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca249de4-0737-4c42-b2e9-78a0abf2bf94-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.247891 5024 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81bf5937-61dd-49d6-b99f-8cc34a5939b5-logs\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.593507 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81bf5937-61dd-49d6-b99f-8cc34a5939b5","Type":"ContainerDied","Data":"ffb7dab466b00c071944be5fac196c4a0c1af6d9712c240b6f4ec5e55e562392"} Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.593962 5024 scope.go:117] "RemoveContainer" containerID="057a68ce4aa4280d55cec9a0c7f8b0bd553774040c4981793d8167202db6fb0e" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.593629 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.596764 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bm8ht" event={"ID":"ca249de4-0737-4c42-b2e9-78a0abf2bf94","Type":"ContainerDied","Data":"c285e4c7d727fe3c5a84f750b73032284711027257b494d45264494103d2c299"} Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.596800 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c285e4c7d727fe3c5a84f750b73032284711027257b494d45264494103d2c299" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.596868 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bm8ht" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.611285 5024 generic.go:334] "Generic (PLEG): container finished" podID="8fb081f8-d831-4362-8169-d4b183854adc" containerID="dd55042b83ed36c4af6d7201255cbd764156a213a3e449f2ebaa3308297d4ff5" exitCode=143 Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.611464 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="aea4531c-c489-4110-9ce3-ddffc63d0ab8" containerName="nova-scheduler-scheduler" containerID="cri-o://150f597e768263efee961e5fbbc14af0125b9083aa0b34c27bf182be9d5bfa00" gracePeriod=30 Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.611746 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8fb081f8-d831-4362-8169-d4b183854adc","Type":"ContainerDied","Data":"dd55042b83ed36c4af6d7201255cbd764156a213a3e449f2ebaa3308297d4ff5"} Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.636349 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 07 12:48:16 crc kubenswrapper[5024]: E1007 12:48:16.636795 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81bf5937-61dd-49d6-b99f-8cc34a5939b5" containerName="nova-metadata-log" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.636817 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="81bf5937-61dd-49d6-b99f-8cc34a5939b5" containerName="nova-metadata-log" Oct 07 12:48:16 crc kubenswrapper[5024]: E1007 12:48:16.636828 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ebb6498-3228-4baa-984b-d76104094326" containerName="init" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.636835 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ebb6498-3228-4baa-984b-d76104094326" containerName="init" Oct 07 12:48:16 crc kubenswrapper[5024]: E1007 12:48:16.636859 5024 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17f46fbe-963b-4d59-b9a8-3d02e31157a3" containerName="nova-manage" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.636879 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="17f46fbe-963b-4d59-b9a8-3d02e31157a3" containerName="nova-manage" Oct 07 12:48:16 crc kubenswrapper[5024]: E1007 12:48:16.636907 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca249de4-0737-4c42-b2e9-78a0abf2bf94" containerName="nova-cell1-conductor-db-sync" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.636916 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca249de4-0737-4c42-b2e9-78a0abf2bf94" containerName="nova-cell1-conductor-db-sync" Oct 07 12:48:16 crc kubenswrapper[5024]: E1007 12:48:16.636932 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ebb6498-3228-4baa-984b-d76104094326" containerName="dnsmasq-dns" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.636939 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ebb6498-3228-4baa-984b-d76104094326" containerName="dnsmasq-dns" Oct 07 12:48:16 crc kubenswrapper[5024]: E1007 12:48:16.636958 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81bf5937-61dd-49d6-b99f-8cc34a5939b5" containerName="nova-metadata-metadata" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.636965 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="81bf5937-61dd-49d6-b99f-8cc34a5939b5" containerName="nova-metadata-metadata" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.637278 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="81bf5937-61dd-49d6-b99f-8cc34a5939b5" containerName="nova-metadata-metadata" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.637296 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="17f46fbe-963b-4d59-b9a8-3d02e31157a3" containerName="nova-manage" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 
12:48:16.637307 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="81bf5937-61dd-49d6-b99f-8cc34a5939b5" containerName="nova-metadata-log" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.637321 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ebb6498-3228-4baa-984b-d76104094326" containerName="dnsmasq-dns" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.637337 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca249de4-0737-4c42-b2e9-78a0abf2bf94" containerName="nova-cell1-conductor-db-sync" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.638106 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.642493 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.673467 5024 scope.go:117] "RemoveContainer" containerID="c8d7cb3be33726418188f0cced5d15f279a3c4b749da780a2e42ed37959c55a3" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.686038 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.699517 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.738890 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.757507 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c64c06f8-a7f8-45a5-ae11-14f952a5304c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c64c06f8-a7f8-45a5-ae11-14f952a5304c\") " pod="openstack/nova-cell1-conductor-0" Oct 07 12:48:16 crc kubenswrapper[5024]: 
I1007 12:48:16.757600 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c64c06f8-a7f8-45a5-ae11-14f952a5304c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c64c06f8-a7f8-45a5-ae11-14f952a5304c\") " pod="openstack/nova-cell1-conductor-0" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.757651 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g9hr\" (UniqueName: \"kubernetes.io/projected/c64c06f8-a7f8-45a5-ae11-14f952a5304c-kube-api-access-8g9hr\") pod \"nova-cell1-conductor-0\" (UID: \"c64c06f8-a7f8-45a5-ae11-14f952a5304c\") " pod="openstack/nova-cell1-conductor-0" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.763307 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ebb6498-3228-4baa-984b-d76104094326" path="/var/lib/kubelet/pods/0ebb6498-3228-4baa-984b-d76104094326/volumes" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.763947 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81bf5937-61dd-49d6-b99f-8cc34a5939b5" path="/var/lib/kubelet/pods/81bf5937-61dd-49d6-b99f-8cc34a5939b5/volumes" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.764814 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.767017 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.776021 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.778983 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.780155 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.859948 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0a23fee-3395-44fa-9ce5-71a1530a2910-config-data\") pod \"nova-metadata-0\" (UID: \"d0a23fee-3395-44fa-9ce5-71a1530a2910\") " pod="openstack/nova-metadata-0" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.860018 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cgwn\" (UniqueName: \"kubernetes.io/projected/d0a23fee-3395-44fa-9ce5-71a1530a2910-kube-api-access-2cgwn\") pod \"nova-metadata-0\" (UID: \"d0a23fee-3395-44fa-9ce5-71a1530a2910\") " pod="openstack/nova-metadata-0" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.860171 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a23fee-3395-44fa-9ce5-71a1530a2910-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d0a23fee-3395-44fa-9ce5-71a1530a2910\") " pod="openstack/nova-metadata-0" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.860371 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c64c06f8-a7f8-45a5-ae11-14f952a5304c-config-data\") pod \"nova-cell1-conductor-0\" (UID: 
\"c64c06f8-a7f8-45a5-ae11-14f952a5304c\") " pod="openstack/nova-cell1-conductor-0" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.860438 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0a23fee-3395-44fa-9ce5-71a1530a2910-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d0a23fee-3395-44fa-9ce5-71a1530a2910\") " pod="openstack/nova-metadata-0" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.860542 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c64c06f8-a7f8-45a5-ae11-14f952a5304c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c64c06f8-a7f8-45a5-ae11-14f952a5304c\") " pod="openstack/nova-cell1-conductor-0" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.860600 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g9hr\" (UniqueName: \"kubernetes.io/projected/c64c06f8-a7f8-45a5-ae11-14f952a5304c-kube-api-access-8g9hr\") pod \"nova-cell1-conductor-0\" (UID: \"c64c06f8-a7f8-45a5-ae11-14f952a5304c\") " pod="openstack/nova-cell1-conductor-0" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.860679 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0a23fee-3395-44fa-9ce5-71a1530a2910-logs\") pod \"nova-metadata-0\" (UID: \"d0a23fee-3395-44fa-9ce5-71a1530a2910\") " pod="openstack/nova-metadata-0" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.865704 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c64c06f8-a7f8-45a5-ae11-14f952a5304c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c64c06f8-a7f8-45a5-ae11-14f952a5304c\") " pod="openstack/nova-cell1-conductor-0" Oct 07 12:48:16 crc 
kubenswrapper[5024]: I1007 12:48:16.866255 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c64c06f8-a7f8-45a5-ae11-14f952a5304c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c64c06f8-a7f8-45a5-ae11-14f952a5304c\") " pod="openstack/nova-cell1-conductor-0" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.879674 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g9hr\" (UniqueName: \"kubernetes.io/projected/c64c06f8-a7f8-45a5-ae11-14f952a5304c-kube-api-access-8g9hr\") pod \"nova-cell1-conductor-0\" (UID: \"c64c06f8-a7f8-45a5-ae11-14f952a5304c\") " pod="openstack/nova-cell1-conductor-0" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.962969 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0a23fee-3395-44fa-9ce5-71a1530a2910-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d0a23fee-3395-44fa-9ce5-71a1530a2910\") " pod="openstack/nova-metadata-0" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.963081 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0a23fee-3395-44fa-9ce5-71a1530a2910-logs\") pod \"nova-metadata-0\" (UID: \"d0a23fee-3395-44fa-9ce5-71a1530a2910\") " pod="openstack/nova-metadata-0" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.963226 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0a23fee-3395-44fa-9ce5-71a1530a2910-config-data\") pod \"nova-metadata-0\" (UID: \"d0a23fee-3395-44fa-9ce5-71a1530a2910\") " pod="openstack/nova-metadata-0" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.963254 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cgwn\" (UniqueName: 
\"kubernetes.io/projected/d0a23fee-3395-44fa-9ce5-71a1530a2910-kube-api-access-2cgwn\") pod \"nova-metadata-0\" (UID: \"d0a23fee-3395-44fa-9ce5-71a1530a2910\") " pod="openstack/nova-metadata-0" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.963285 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a23fee-3395-44fa-9ce5-71a1530a2910-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d0a23fee-3395-44fa-9ce5-71a1530a2910\") " pod="openstack/nova-metadata-0" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.963928 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0a23fee-3395-44fa-9ce5-71a1530a2910-logs\") pod \"nova-metadata-0\" (UID: \"d0a23fee-3395-44fa-9ce5-71a1530a2910\") " pod="openstack/nova-metadata-0" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.967426 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0a23fee-3395-44fa-9ce5-71a1530a2910-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d0a23fee-3395-44fa-9ce5-71a1530a2910\") " pod="openstack/nova-metadata-0" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.968080 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0a23fee-3395-44fa-9ce5-71a1530a2910-config-data\") pod \"nova-metadata-0\" (UID: \"d0a23fee-3395-44fa-9ce5-71a1530a2910\") " pod="openstack/nova-metadata-0" Oct 07 12:48:16 crc kubenswrapper[5024]: I1007 12:48:16.976883 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a23fee-3395-44fa-9ce5-71a1530a2910-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d0a23fee-3395-44fa-9ce5-71a1530a2910\") " pod="openstack/nova-metadata-0" Oct 07 12:48:16 crc 
kubenswrapper[5024]: I1007 12:48:16.981128 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cgwn\" (UniqueName: \"kubernetes.io/projected/d0a23fee-3395-44fa-9ce5-71a1530a2910-kube-api-access-2cgwn\") pod \"nova-metadata-0\" (UID: \"d0a23fee-3395-44fa-9ce5-71a1530a2910\") " pod="openstack/nova-metadata-0" Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.013696 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.090777 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.300602 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.606677 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 07 12:48:17 crc kubenswrapper[5024]: W1007 12:48:17.613721 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0a23fee_3395_44fa_9ce5_71a1530a2910.slice/crio-d7930be1dd79d86e487256f7755e68c413c558b54fd8a6ee8b41028b8bf4ed4a WatchSource:0}: Error finding container d7930be1dd79d86e487256f7755e68c413c558b54fd8a6ee8b41028b8bf4ed4a: Status 404 returned error can't find the container with id d7930be1dd79d86e487256f7755e68c413c558b54fd8a6ee8b41028b8bf4ed4a Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.623874 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c64c06f8-a7f8-45a5-ae11-14f952a5304c","Type":"ContainerStarted","Data":"b9472665ce2da25758fbc2fad5adc4cc045b7cec8f09ab9cd46df69a8e5cbff3"} Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.623916 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-0" event={"ID":"c64c06f8-a7f8-45a5-ae11-14f952a5304c","Type":"ContainerStarted","Data":"1a4cd2a0f035d38aaeebfa133b3711b13b57e51c31cc23893e907668527607fa"} Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.623932 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.629004 5024 generic.go:334] "Generic (PLEG): container finished" podID="66a6990d-f48d-401c-953b-b09b8a654b10" containerID="3ba76ed76296e33904aad24b337568a9f178fd640c4586461cb934f4661b306c" exitCode=0 Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.629037 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66a6990d-f48d-401c-953b-b09b8a654b10","Type":"ContainerDied","Data":"3ba76ed76296e33904aad24b337568a9f178fd640c4586461cb934f4661b306c"} Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.637823 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.648261 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=1.6482452269999999 podStartE2EDuration="1.648245227s" podCreationTimestamp="2025-10-07 12:48:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:48:17.643096625 +0000 UTC m=+1235.718883463" watchObservedRunningTime="2025-10-07 12:48:17.648245227 +0000 UTC m=+1235.724032065" Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.777295 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66a6990d-f48d-401c-953b-b09b8a654b10-log-httpd\") pod \"66a6990d-f48d-401c-953b-b09b8a654b10\" (UID: \"66a6990d-f48d-401c-953b-b09b8a654b10\") " Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.777542 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66a6990d-f48d-401c-953b-b09b8a654b10-scripts\") pod \"66a6990d-f48d-401c-953b-b09b8a654b10\" (UID: \"66a6990d-f48d-401c-953b-b09b8a654b10\") " Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.777583 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66a6990d-f48d-401c-953b-b09b8a654b10-run-httpd\") pod \"66a6990d-f48d-401c-953b-b09b8a654b10\" (UID: \"66a6990d-f48d-401c-953b-b09b8a654b10\") " Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.777609 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66a6990d-f48d-401c-953b-b09b8a654b10-sg-core-conf-yaml\") pod \"66a6990d-f48d-401c-953b-b09b8a654b10\" (UID: 
\"66a6990d-f48d-401c-953b-b09b8a654b10\") " Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.777660 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66a6990d-f48d-401c-953b-b09b8a654b10-combined-ca-bundle\") pod \"66a6990d-f48d-401c-953b-b09b8a654b10\" (UID: \"66a6990d-f48d-401c-953b-b09b8a654b10\") " Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.777688 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6689z\" (UniqueName: \"kubernetes.io/projected/66a6990d-f48d-401c-953b-b09b8a654b10-kube-api-access-6689z\") pod \"66a6990d-f48d-401c-953b-b09b8a654b10\" (UID: \"66a6990d-f48d-401c-953b-b09b8a654b10\") " Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.777835 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66a6990d-f48d-401c-953b-b09b8a654b10-config-data\") pod \"66a6990d-f48d-401c-953b-b09b8a654b10\" (UID: \"66a6990d-f48d-401c-953b-b09b8a654b10\") " Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.778387 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66a6990d-f48d-401c-953b-b09b8a654b10-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "66a6990d-f48d-401c-953b-b09b8a654b10" (UID: "66a6990d-f48d-401c-953b-b09b8a654b10"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.779074 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66a6990d-f48d-401c-953b-b09b8a654b10-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "66a6990d-f48d-401c-953b-b09b8a654b10" (UID: "66a6990d-f48d-401c-953b-b09b8a654b10"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.782862 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66a6990d-f48d-401c-953b-b09b8a654b10-scripts" (OuterVolumeSpecName: "scripts") pod "66a6990d-f48d-401c-953b-b09b8a654b10" (UID: "66a6990d-f48d-401c-953b-b09b8a654b10"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.785351 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66a6990d-f48d-401c-953b-b09b8a654b10-kube-api-access-6689z" (OuterVolumeSpecName: "kube-api-access-6689z") pod "66a6990d-f48d-401c-953b-b09b8a654b10" (UID: "66a6990d-f48d-401c-953b-b09b8a654b10"). InnerVolumeSpecName "kube-api-access-6689z". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.808666 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66a6990d-f48d-401c-953b-b09b8a654b10-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "66a6990d-f48d-401c-953b-b09b8a654b10" (UID: "66a6990d-f48d-401c-953b-b09b8a654b10"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.877823 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66a6990d-f48d-401c-953b-b09b8a654b10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66a6990d-f48d-401c-953b-b09b8a654b10" (UID: "66a6990d-f48d-401c-953b-b09b8a654b10"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.880052 5024 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66a6990d-f48d-401c-953b-b09b8a654b10-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.880079 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66a6990d-f48d-401c-953b-b09b8a654b10-scripts\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.880089 5024 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66a6990d-f48d-401c-953b-b09b8a654b10-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.880099 5024 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66a6990d-f48d-401c-953b-b09b8a654b10-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.880109 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66a6990d-f48d-401c-953b-b09b8a654b10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.880117 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6689z\" (UniqueName: \"kubernetes.io/projected/66a6990d-f48d-401c-953b-b09b8a654b10-kube-api-access-6689z\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.919209 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66a6990d-f48d-401c-953b-b09b8a654b10-config-data" (OuterVolumeSpecName: "config-data") pod "66a6990d-f48d-401c-953b-b09b8a654b10" (UID: "66a6990d-f48d-401c-953b-b09b8a654b10"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:17 crc kubenswrapper[5024]: I1007 12:48:17.981779 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66a6990d-f48d-401c-953b-b09b8a654b10-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:18 crc kubenswrapper[5024]: I1007 12:48:18.640243 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66a6990d-f48d-401c-953b-b09b8a654b10","Type":"ContainerDied","Data":"24f03a44b790746cfca504a38522e038ee19144104c43e3bb96577944537f852"} Oct 07 12:48:18 crc kubenswrapper[5024]: I1007 12:48:18.640263 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 07 12:48:18 crc kubenswrapper[5024]: I1007 12:48:18.640295 5024 scope.go:117] "RemoveContainer" containerID="055b2675d510ed984215458ba9a34bdcb701fd3d2b5b6e199169ed4ced68bd39" Oct 07 12:48:18 crc kubenswrapper[5024]: I1007 12:48:18.644204 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0a23fee-3395-44fa-9ce5-71a1530a2910","Type":"ContainerStarted","Data":"6be90aea6ac8a8bde144dace9ec4f3b15791e4829002170b796023f2d6e26d26"} Oct 07 12:48:18 crc kubenswrapper[5024]: I1007 12:48:18.644269 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0a23fee-3395-44fa-9ce5-71a1530a2910","Type":"ContainerStarted","Data":"eca43c671e1b9b2e9f7cdeaaffc0c105ba429f871bedbdffa310f93e73d79822"} Oct 07 12:48:18 crc kubenswrapper[5024]: I1007 12:48:18.644287 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0a23fee-3395-44fa-9ce5-71a1530a2910","Type":"ContainerStarted","Data":"d7930be1dd79d86e487256f7755e68c413c558b54fd8a6ee8b41028b8bf4ed4a"} Oct 07 12:48:18 crc kubenswrapper[5024]: I1007 12:48:18.668386 5024 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.66837067 podStartE2EDuration="2.66837067s" podCreationTimestamp="2025-10-07 12:48:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:48:18.668266887 +0000 UTC m=+1236.744053735" watchObservedRunningTime="2025-10-07 12:48:18.66837067 +0000 UTC m=+1236.744157508" Oct 07 12:48:18 crc kubenswrapper[5024]: I1007 12:48:18.691388 5024 scope.go:117] "RemoveContainer" containerID="8427f622c9f8076465aaeb85e6ceda15375c0c4f7a75799f3653136d09be222d" Oct 07 12:48:18 crc kubenswrapper[5024]: I1007 12:48:18.700143 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:48:18 crc kubenswrapper[5024]: I1007 12:48:18.715448 5024 scope.go:117] "RemoveContainer" containerID="3ba76ed76296e33904aad24b337568a9f178fd640c4586461cb934f4661b306c" Oct 07 12:48:18 crc kubenswrapper[5024]: I1007 12:48:18.721494 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:48:18 crc kubenswrapper[5024]: I1007 12:48:18.770516 5024 scope.go:117] "RemoveContainer" containerID="6cbfe7a575786b01693dff43ba9d22ccab005c0adb7d202c7dd5ddd331b2217e" Oct 07 12:48:18 crc kubenswrapper[5024]: E1007 12:48:18.798025 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="150f597e768263efee961e5fbbc14af0125b9083aa0b34c27bf182be9d5bfa00" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 07 12:48:18 crc kubenswrapper[5024]: E1007 12:48:18.800717 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="150f597e768263efee961e5fbbc14af0125b9083aa0b34c27bf182be9d5bfa00" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 07 12:48:18 crc kubenswrapper[5024]: E1007 12:48:18.804614 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="150f597e768263efee961e5fbbc14af0125b9083aa0b34c27bf182be9d5bfa00" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 07 12:48:18 crc kubenswrapper[5024]: E1007 12:48:18.804689 5024 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="aea4531c-c489-4110-9ce3-ddffc63d0ab8" containerName="nova-scheduler-scheduler" Oct 07 12:48:18 crc kubenswrapper[5024]: I1007 12:48:18.837862 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66a6990d-f48d-401c-953b-b09b8a654b10" path="/var/lib/kubelet/pods/66a6990d-f48d-401c-953b-b09b8a654b10/volumes" Oct 07 12:48:18 crc kubenswrapper[5024]: I1007 12:48:18.838602 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:48:18 crc kubenswrapper[5024]: E1007 12:48:18.838867 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a6990d-f48d-401c-953b-b09b8a654b10" containerName="ceilometer-notification-agent" Oct 07 12:48:18 crc kubenswrapper[5024]: I1007 12:48:18.838884 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a6990d-f48d-401c-953b-b09b8a654b10" containerName="ceilometer-notification-agent" Oct 07 12:48:18 crc kubenswrapper[5024]: E1007 12:48:18.838898 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a6990d-f48d-401c-953b-b09b8a654b10" containerName="ceilometer-central-agent" Oct 07 12:48:18 crc kubenswrapper[5024]: I1007 12:48:18.838905 
5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a6990d-f48d-401c-953b-b09b8a654b10" containerName="ceilometer-central-agent" Oct 07 12:48:18 crc kubenswrapper[5024]: E1007 12:48:18.838917 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a6990d-f48d-401c-953b-b09b8a654b10" containerName="proxy-httpd" Oct 07 12:48:18 crc kubenswrapper[5024]: I1007 12:48:18.838923 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a6990d-f48d-401c-953b-b09b8a654b10" containerName="proxy-httpd" Oct 07 12:48:18 crc kubenswrapper[5024]: E1007 12:48:18.838942 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a6990d-f48d-401c-953b-b09b8a654b10" containerName="sg-core" Oct 07 12:48:18 crc kubenswrapper[5024]: I1007 12:48:18.838947 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a6990d-f48d-401c-953b-b09b8a654b10" containerName="sg-core" Oct 07 12:48:18 crc kubenswrapper[5024]: I1007 12:48:18.839119 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="66a6990d-f48d-401c-953b-b09b8a654b10" containerName="sg-core" Oct 07 12:48:18 crc kubenswrapper[5024]: I1007 12:48:18.839138 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="66a6990d-f48d-401c-953b-b09b8a654b10" containerName="ceilometer-central-agent" Oct 07 12:48:18 crc kubenswrapper[5024]: I1007 12:48:18.839169 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="66a6990d-f48d-401c-953b-b09b8a654b10" containerName="ceilometer-notification-agent" Oct 07 12:48:18 crc kubenswrapper[5024]: I1007 12:48:18.839178 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="66a6990d-f48d-401c-953b-b09b8a654b10" containerName="proxy-httpd" Oct 07 12:48:18 crc kubenswrapper[5024]: I1007 12:48:18.842092 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:48:18 crc kubenswrapper[5024]: I1007 12:48:18.842233 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 07 12:48:18 crc kubenswrapper[5024]: I1007 12:48:18.844783 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 07 12:48:18 crc kubenswrapper[5024]: I1007 12:48:18.844950 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 07 12:48:18 crc kubenswrapper[5024]: I1007 12:48:18.845038 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:18.999963 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " pod="openstack/ceilometer-0" Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.000314 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l29lg\" (UniqueName: \"kubernetes.io/projected/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-kube-api-access-l29lg\") pod \"ceilometer-0\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " pod="openstack/ceilometer-0" Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.000361 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-scripts\") pod \"ceilometer-0\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " pod="openstack/ceilometer-0" Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.000485 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-log-httpd\") pod \"ceilometer-0\" (UID: 
\"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " pod="openstack/ceilometer-0" Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.000531 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " pod="openstack/ceilometer-0" Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.000665 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-run-httpd\") pod \"ceilometer-0\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " pod="openstack/ceilometer-0" Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.000793 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " pod="openstack/ceilometer-0" Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.000819 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-config-data\") pod \"ceilometer-0\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " pod="openstack/ceilometer-0" Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.103014 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " pod="openstack/ceilometer-0" Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.103076 
5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-config-data\") pod \"ceilometer-0\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " pod="openstack/ceilometer-0" Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.103169 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " pod="openstack/ceilometer-0" Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.103207 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l29lg\" (UniqueName: \"kubernetes.io/projected/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-kube-api-access-l29lg\") pod \"ceilometer-0\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " pod="openstack/ceilometer-0" Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.103246 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-scripts\") pod \"ceilometer-0\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " pod="openstack/ceilometer-0" Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.103291 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-log-httpd\") pod \"ceilometer-0\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " pod="openstack/ceilometer-0" Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.103312 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " pod="openstack/ceilometer-0" Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.103341 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-run-httpd\") pod \"ceilometer-0\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " pod="openstack/ceilometer-0" Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.104033 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-run-httpd\") pod \"ceilometer-0\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " pod="openstack/ceilometer-0" Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.104791 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-log-httpd\") pod \"ceilometer-0\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " pod="openstack/ceilometer-0" Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.109939 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " pod="openstack/ceilometer-0" Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.110191 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " pod="openstack/ceilometer-0" Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.110293 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-scripts\") pod \"ceilometer-0\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " pod="openstack/ceilometer-0" Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.112651 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " pod="openstack/ceilometer-0" Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.120598 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-config-data\") pod \"ceilometer-0\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " pod="openstack/ceilometer-0" Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.121854 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l29lg\" (UniqueName: \"kubernetes.io/projected/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-kube-api-access-l29lg\") pod \"ceilometer-0\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " pod="openstack/ceilometer-0" Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.168086 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.525532 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d97fcdd8f-s98w8" podUID="0ebb6498-3228-4baa-984b-d76104094326" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.156:5353: i/o timeout" Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.613377 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.666277 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb","Type":"ContainerStarted","Data":"9bac922b4d06770661ba916635475da4530ee7c8dc448cebd63d78e0b3e2d71a"} Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.696581 5024 generic.go:334] "Generic (PLEG): container finished" podID="aea4531c-c489-4110-9ce3-ddffc63d0ab8" containerID="150f597e768263efee961e5fbbc14af0125b9083aa0b34c27bf182be9d5bfa00" exitCode=0 Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.696709 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"aea4531c-c489-4110-9ce3-ddffc63d0ab8","Type":"ContainerDied","Data":"150f597e768263efee961e5fbbc14af0125b9083aa0b34c27bf182be9d5bfa00"} Oct 07 12:48:19 crc kubenswrapper[5024]: I1007 12:48:19.921231 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 07 12:48:20 crc kubenswrapper[5024]: I1007 12:48:20.044900 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbcrk\" (UniqueName: \"kubernetes.io/projected/aea4531c-c489-4110-9ce3-ddffc63d0ab8-kube-api-access-fbcrk\") pod \"aea4531c-c489-4110-9ce3-ddffc63d0ab8\" (UID: \"aea4531c-c489-4110-9ce3-ddffc63d0ab8\") " Oct 07 12:48:20 crc kubenswrapper[5024]: I1007 12:48:20.045411 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aea4531c-c489-4110-9ce3-ddffc63d0ab8-config-data\") pod \"aea4531c-c489-4110-9ce3-ddffc63d0ab8\" (UID: \"aea4531c-c489-4110-9ce3-ddffc63d0ab8\") " Oct 07 12:48:20 crc kubenswrapper[5024]: I1007 12:48:20.045457 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aea4531c-c489-4110-9ce3-ddffc63d0ab8-combined-ca-bundle\") pod \"aea4531c-c489-4110-9ce3-ddffc63d0ab8\" (UID: \"aea4531c-c489-4110-9ce3-ddffc63d0ab8\") " Oct 07 12:48:20 crc kubenswrapper[5024]: I1007 12:48:20.050818 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aea4531c-c489-4110-9ce3-ddffc63d0ab8-kube-api-access-fbcrk" (OuterVolumeSpecName: "kube-api-access-fbcrk") pod "aea4531c-c489-4110-9ce3-ddffc63d0ab8" (UID: "aea4531c-c489-4110-9ce3-ddffc63d0ab8"). InnerVolumeSpecName "kube-api-access-fbcrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:48:20 crc kubenswrapper[5024]: I1007 12:48:20.073474 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aea4531c-c489-4110-9ce3-ddffc63d0ab8-config-data" (OuterVolumeSpecName: "config-data") pod "aea4531c-c489-4110-9ce3-ddffc63d0ab8" (UID: "aea4531c-c489-4110-9ce3-ddffc63d0ab8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:20 crc kubenswrapper[5024]: I1007 12:48:20.086446 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aea4531c-c489-4110-9ce3-ddffc63d0ab8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aea4531c-c489-4110-9ce3-ddffc63d0ab8" (UID: "aea4531c-c489-4110-9ce3-ddffc63d0ab8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:20 crc kubenswrapper[5024]: I1007 12:48:20.148219 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aea4531c-c489-4110-9ce3-ddffc63d0ab8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:20 crc kubenswrapper[5024]: I1007 12:48:20.148282 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbcrk\" (UniqueName: \"kubernetes.io/projected/aea4531c-c489-4110-9ce3-ddffc63d0ab8-kube-api-access-fbcrk\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:20 crc kubenswrapper[5024]: I1007 12:48:20.148295 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aea4531c-c489-4110-9ce3-ddffc63d0ab8-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:20 crc kubenswrapper[5024]: I1007 12:48:20.711603 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"aea4531c-c489-4110-9ce3-ddffc63d0ab8","Type":"ContainerDied","Data":"0bf3a7ecee8034f0372311c574b2a502388f73ca45c226d69159069300d9b3ad"} Oct 07 12:48:20 crc kubenswrapper[5024]: I1007 12:48:20.711675 5024 scope.go:117] "RemoveContainer" containerID="150f597e768263efee961e5fbbc14af0125b9083aa0b34c27bf182be9d5bfa00" Oct 07 12:48:20 crc kubenswrapper[5024]: I1007 12:48:20.711698 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 07 12:48:20 crc kubenswrapper[5024]: I1007 12:48:20.715562 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb","Type":"ContainerStarted","Data":"c00a225764752c583e63bd66c5f170779a75e6d515b8ad2855cd9a7e9d7ef9db"} Oct 07 12:48:20 crc kubenswrapper[5024]: I1007 12:48:20.798290 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 07 12:48:20 crc kubenswrapper[5024]: I1007 12:48:20.809462 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 07 12:48:20 crc kubenswrapper[5024]: I1007 12:48:20.819605 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 07 12:48:20 crc kubenswrapper[5024]: E1007 12:48:20.820268 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aea4531c-c489-4110-9ce3-ddffc63d0ab8" containerName="nova-scheduler-scheduler" Oct 07 12:48:20 crc kubenswrapper[5024]: I1007 12:48:20.820296 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="aea4531c-c489-4110-9ce3-ddffc63d0ab8" containerName="nova-scheduler-scheduler" Oct 07 12:48:20 crc kubenswrapper[5024]: I1007 12:48:20.820782 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="aea4531c-c489-4110-9ce3-ddffc63d0ab8" containerName="nova-scheduler-scheduler" Oct 07 12:48:20 crc kubenswrapper[5024]: I1007 12:48:20.821561 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 07 12:48:20 crc kubenswrapper[5024]: I1007 12:48:20.826108 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 07 12:48:20 crc kubenswrapper[5024]: I1007 12:48:20.846961 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 07 12:48:20 crc kubenswrapper[5024]: I1007 12:48:20.964842 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eec8c08-d884-41c0-b9da-af69d056c96a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6eec8c08-d884-41c0-b9da-af69d056c96a\") " pod="openstack/nova-scheduler-0" Oct 07 12:48:20 crc kubenswrapper[5024]: I1007 12:48:20.964911 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwlsr\" (UniqueName: \"kubernetes.io/projected/6eec8c08-d884-41c0-b9da-af69d056c96a-kube-api-access-fwlsr\") pod \"nova-scheduler-0\" (UID: \"6eec8c08-d884-41c0-b9da-af69d056c96a\") " pod="openstack/nova-scheduler-0" Oct 07 12:48:20 crc kubenswrapper[5024]: I1007 12:48:20.964935 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eec8c08-d884-41c0-b9da-af69d056c96a-config-data\") pod \"nova-scheduler-0\" (UID: \"6eec8c08-d884-41c0-b9da-af69d056c96a\") " pod="openstack/nova-scheduler-0" Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.067100 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eec8c08-d884-41c0-b9da-af69d056c96a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6eec8c08-d884-41c0-b9da-af69d056c96a\") " pod="openstack/nova-scheduler-0" Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.067467 5024 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwlsr\" (UniqueName: \"kubernetes.io/projected/6eec8c08-d884-41c0-b9da-af69d056c96a-kube-api-access-fwlsr\") pod \"nova-scheduler-0\" (UID: \"6eec8c08-d884-41c0-b9da-af69d056c96a\") " pod="openstack/nova-scheduler-0" Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.067498 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eec8c08-d884-41c0-b9da-af69d056c96a-config-data\") pod \"nova-scheduler-0\" (UID: \"6eec8c08-d884-41c0-b9da-af69d056c96a\") " pod="openstack/nova-scheduler-0" Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.071916 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eec8c08-d884-41c0-b9da-af69d056c96a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6eec8c08-d884-41c0-b9da-af69d056c96a\") " pod="openstack/nova-scheduler-0" Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.075623 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eec8c08-d884-41c0-b9da-af69d056c96a-config-data\") pod \"nova-scheduler-0\" (UID: \"6eec8c08-d884-41c0-b9da-af69d056c96a\") " pod="openstack/nova-scheduler-0" Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.084933 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwlsr\" (UniqueName: \"kubernetes.io/projected/6eec8c08-d884-41c0-b9da-af69d056c96a-kube-api-access-fwlsr\") pod \"nova-scheduler-0\" (UID: \"6eec8c08-d884-41c0-b9da-af69d056c96a\") " pod="openstack/nova-scheduler-0" Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.229593 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.406598 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.578900 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fb081f8-d831-4362-8169-d4b183854adc-logs\") pod \"8fb081f8-d831-4362-8169-d4b183854adc\" (UID: \"8fb081f8-d831-4362-8169-d4b183854adc\") " Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.579692 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fb081f8-d831-4362-8169-d4b183854adc-logs" (OuterVolumeSpecName: "logs") pod "8fb081f8-d831-4362-8169-d4b183854adc" (UID: "8fb081f8-d831-4362-8169-d4b183854adc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.579759 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fb081f8-d831-4362-8169-d4b183854adc-config-data\") pod \"8fb081f8-d831-4362-8169-d4b183854adc\" (UID: \"8fb081f8-d831-4362-8169-d4b183854adc\") " Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.579802 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fb081f8-d831-4362-8169-d4b183854adc-combined-ca-bundle\") pod \"8fb081f8-d831-4362-8169-d4b183854adc\" (UID: \"8fb081f8-d831-4362-8169-d4b183854adc\") " Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.580596 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-884hb\" (UniqueName: \"kubernetes.io/projected/8fb081f8-d831-4362-8169-d4b183854adc-kube-api-access-884hb\") pod \"8fb081f8-d831-4362-8169-d4b183854adc\" 
(UID: \"8fb081f8-d831-4362-8169-d4b183854adc\") " Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.580955 5024 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fb081f8-d831-4362-8169-d4b183854adc-logs\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.584834 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fb081f8-d831-4362-8169-d4b183854adc-kube-api-access-884hb" (OuterVolumeSpecName: "kube-api-access-884hb") pod "8fb081f8-d831-4362-8169-d4b183854adc" (UID: "8fb081f8-d831-4362-8169-d4b183854adc"). InnerVolumeSpecName "kube-api-access-884hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.616025 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fb081f8-d831-4362-8169-d4b183854adc-config-data" (OuterVolumeSpecName: "config-data") pod "8fb081f8-d831-4362-8169-d4b183854adc" (UID: "8fb081f8-d831-4362-8169-d4b183854adc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.616142 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fb081f8-d831-4362-8169-d4b183854adc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fb081f8-d831-4362-8169-d4b183854adc" (UID: "8fb081f8-d831-4362-8169-d4b183854adc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.684425 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fb081f8-d831-4362-8169-d4b183854adc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.684474 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-884hb\" (UniqueName: \"kubernetes.io/projected/8fb081f8-d831-4362-8169-d4b183854adc-kube-api-access-884hb\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.684489 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fb081f8-d831-4362-8169-d4b183854adc-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.745737 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb","Type":"ContainerStarted","Data":"a9ffa56ea1a9c2057d6080f2f16562c53f651347c0ac92a638cb7211bc84765b"} Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.753396 5024 generic.go:334] "Generic (PLEG): container finished" podID="8fb081f8-d831-4362-8169-d4b183854adc" containerID="56e241734d99c14216496e5b7c4ed071b87cd2b77203d6d25df03a61dc4090de" exitCode=0 Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.753445 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8fb081f8-d831-4362-8169-d4b183854adc","Type":"ContainerDied","Data":"56e241734d99c14216496e5b7c4ed071b87cd2b77203d6d25df03a61dc4090de"} Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.753475 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"8fb081f8-d831-4362-8169-d4b183854adc","Type":"ContainerDied","Data":"138dd63d04382e6eb7ed2091d2f658b3053d4cc6c0535c8fd080680e930640ff"} Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.753493 5024 scope.go:117] "RemoveContainer" containerID="56e241734d99c14216496e5b7c4ed071b87cd2b77203d6d25df03a61dc4090de" Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.753522 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.793917 5024 scope.go:117] "RemoveContainer" containerID="dd55042b83ed36c4af6d7201255cbd764156a213a3e449f2ebaa3308297d4ff5" Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.816587 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.960664 5024 scope.go:117] "RemoveContainer" containerID="56e241734d99c14216496e5b7c4ed071b87cd2b77203d6d25df03a61dc4090de" Oct 07 12:48:21 crc kubenswrapper[5024]: E1007 12:48:21.961699 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56e241734d99c14216496e5b7c4ed071b87cd2b77203d6d25df03a61dc4090de\": container with ID starting with 56e241734d99c14216496e5b7c4ed071b87cd2b77203d6d25df03a61dc4090de not found: ID does not exist" containerID="56e241734d99c14216496e5b7c4ed071b87cd2b77203d6d25df03a61dc4090de" Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.961746 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56e241734d99c14216496e5b7c4ed071b87cd2b77203d6d25df03a61dc4090de"} err="failed to get container status \"56e241734d99c14216496e5b7c4ed071b87cd2b77203d6d25df03a61dc4090de\": rpc error: code = NotFound desc = could not find container \"56e241734d99c14216496e5b7c4ed071b87cd2b77203d6d25df03a61dc4090de\": container with ID starting with 
56e241734d99c14216496e5b7c4ed071b87cd2b77203d6d25df03a61dc4090de not found: ID does not exist" Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.961787 5024 scope.go:117] "RemoveContainer" containerID="dd55042b83ed36c4af6d7201255cbd764156a213a3e449f2ebaa3308297d4ff5" Oct 07 12:48:21 crc kubenswrapper[5024]: E1007 12:48:21.962509 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd55042b83ed36c4af6d7201255cbd764156a213a3e449f2ebaa3308297d4ff5\": container with ID starting with dd55042b83ed36c4af6d7201255cbd764156a213a3e449f2ebaa3308297d4ff5 not found: ID does not exist" containerID="dd55042b83ed36c4af6d7201255cbd764156a213a3e449f2ebaa3308297d4ff5" Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.962545 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd55042b83ed36c4af6d7201255cbd764156a213a3e449f2ebaa3308297d4ff5"} err="failed to get container status \"dd55042b83ed36c4af6d7201255cbd764156a213a3e449f2ebaa3308297d4ff5\": rpc error: code = NotFound desc = could not find container \"dd55042b83ed36c4af6d7201255cbd764156a213a3e449f2ebaa3308297d4ff5\": container with ID starting with dd55042b83ed36c4af6d7201255cbd764156a213a3e449f2ebaa3308297d4ff5 not found: ID does not exist" Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.984837 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 07 12:48:21 crc kubenswrapper[5024]: I1007 12:48:21.992792 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.015847 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 07 12:48:22 crc kubenswrapper[5024]: E1007 12:48:22.016370 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb081f8-d831-4362-8169-d4b183854adc" containerName="nova-api-api" Oct 07 12:48:22 crc 
kubenswrapper[5024]: I1007 12:48:22.016393 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb081f8-d831-4362-8169-d4b183854adc" containerName="nova-api-api" Oct 07 12:48:22 crc kubenswrapper[5024]: E1007 12:48:22.016421 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb081f8-d831-4362-8169-d4b183854adc" containerName="nova-api-log" Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.016430 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb081f8-d831-4362-8169-d4b183854adc" containerName="nova-api-log" Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.016663 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fb081f8-d831-4362-8169-d4b183854adc" containerName="nova-api-api" Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.016692 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fb081f8-d831-4362-8169-d4b183854adc" containerName="nova-api-log" Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.017898 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.023434 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.023732 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.080742 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.098282 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff6cacb-2047-4492-8122-735dc9d4f310-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aff6cacb-2047-4492-8122-735dc9d4f310\") " pod="openstack/nova-api-0" Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.098391 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7h2d\" (UniqueName: \"kubernetes.io/projected/aff6cacb-2047-4492-8122-735dc9d4f310-kube-api-access-h7h2d\") pod \"nova-api-0\" (UID: \"aff6cacb-2047-4492-8122-735dc9d4f310\") " pod="openstack/nova-api-0" Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.098476 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aff6cacb-2047-4492-8122-735dc9d4f310-logs\") pod \"nova-api-0\" (UID: \"aff6cacb-2047-4492-8122-735dc9d4f310\") " pod="openstack/nova-api-0" Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.098516 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aff6cacb-2047-4492-8122-735dc9d4f310-config-data\") pod \"nova-api-0\" (UID: \"aff6cacb-2047-4492-8122-735dc9d4f310\") " 
pod="openstack/nova-api-0" Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.099042 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.099722 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.200405 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aff6cacb-2047-4492-8122-735dc9d4f310-config-data\") pod \"nova-api-0\" (UID: \"aff6cacb-2047-4492-8122-735dc9d4f310\") " pod="openstack/nova-api-0" Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.200534 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff6cacb-2047-4492-8122-735dc9d4f310-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aff6cacb-2047-4492-8122-735dc9d4f310\") " pod="openstack/nova-api-0" Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.200653 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7h2d\" (UniqueName: \"kubernetes.io/projected/aff6cacb-2047-4492-8122-735dc9d4f310-kube-api-access-h7h2d\") pod \"nova-api-0\" (UID: \"aff6cacb-2047-4492-8122-735dc9d4f310\") " pod="openstack/nova-api-0" Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.200730 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aff6cacb-2047-4492-8122-735dc9d4f310-logs\") pod \"nova-api-0\" (UID: \"aff6cacb-2047-4492-8122-735dc9d4f310\") " pod="openstack/nova-api-0" Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.201292 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aff6cacb-2047-4492-8122-735dc9d4f310-logs\") pod 
\"nova-api-0\" (UID: \"aff6cacb-2047-4492-8122-735dc9d4f310\") " pod="openstack/nova-api-0" Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.205421 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aff6cacb-2047-4492-8122-735dc9d4f310-config-data\") pod \"nova-api-0\" (UID: \"aff6cacb-2047-4492-8122-735dc9d4f310\") " pod="openstack/nova-api-0" Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.207867 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff6cacb-2047-4492-8122-735dc9d4f310-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aff6cacb-2047-4492-8122-735dc9d4f310\") " pod="openstack/nova-api-0" Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.217437 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7h2d\" (UniqueName: \"kubernetes.io/projected/aff6cacb-2047-4492-8122-735dc9d4f310-kube-api-access-h7h2d\") pod \"nova-api-0\" (UID: \"aff6cacb-2047-4492-8122-735dc9d4f310\") " pod="openstack/nova-api-0" Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.338363 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.763505 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fb081f8-d831-4362-8169-d4b183854adc" path="/var/lib/kubelet/pods/8fb081f8-d831-4362-8169-d4b183854adc/volumes" Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.765649 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aea4531c-c489-4110-9ce3-ddffc63d0ab8" path="/var/lib/kubelet/pods/aea4531c-c489-4110-9ce3-ddffc63d0ab8/volumes" Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.777968 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb","Type":"ContainerStarted","Data":"3a02db07b64e0ed74e0ade3688902809617b1bd71d67085a3b0dbccc21feba2b"} Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.783844 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6eec8c08-d884-41c0-b9da-af69d056c96a","Type":"ContainerStarted","Data":"02c9080c4f6e269679bed641e973d244a3dd0ee4d532f6d0a308638b7aaeeb99"} Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.783890 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6eec8c08-d884-41c0-b9da-af69d056c96a","Type":"ContainerStarted","Data":"ff0bfed6059a7ed0323fc1fa5cd963372c00934394c122bc3d2c02f31b049532"} Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.786765 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 07 12:48:22 crc kubenswrapper[5024]: W1007 12:48:22.790759 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaff6cacb_2047_4492_8122_735dc9d4f310.slice/crio-68ccb8fbeb7550d00cfe7a71a89c15124eeba55be8da44eccb1ef9b90bb114ef WatchSource:0}: Error finding container 
68ccb8fbeb7550d00cfe7a71a89c15124eeba55be8da44eccb1ef9b90bb114ef: Status 404 returned error can't find the container with id 68ccb8fbeb7550d00cfe7a71a89c15124eeba55be8da44eccb1ef9b90bb114ef Oct 07 12:48:22 crc kubenswrapper[5024]: I1007 12:48:22.826222 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.826204679 podStartE2EDuration="2.826204679s" podCreationTimestamp="2025-10-07 12:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:48:22.821085139 +0000 UTC m=+1240.896871977" watchObservedRunningTime="2025-10-07 12:48:22.826204679 +0000 UTC m=+1240.901991517" Oct 07 12:48:23 crc kubenswrapper[5024]: I1007 12:48:23.791890 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aff6cacb-2047-4492-8122-735dc9d4f310","Type":"ContainerStarted","Data":"24b950ee07ad651ebeccb2e4689077ff044fd58dfdefb9ba4c0b533f25cbd576"} Oct 07 12:48:23 crc kubenswrapper[5024]: I1007 12:48:23.792515 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aff6cacb-2047-4492-8122-735dc9d4f310","Type":"ContainerStarted","Data":"f087f99015daf4645cf7c3a556fdf6dc7f5978fb5195e0746406bb925e5f1336"} Oct 07 12:48:23 crc kubenswrapper[5024]: I1007 12:48:23.792530 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aff6cacb-2047-4492-8122-735dc9d4f310","Type":"ContainerStarted","Data":"68ccb8fbeb7550d00cfe7a71a89c15124eeba55be8da44eccb1ef9b90bb114ef"} Oct 07 12:48:23 crc kubenswrapper[5024]: I1007 12:48:23.825356 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.825334996 podStartE2EDuration="2.825334996s" podCreationTimestamp="2025-10-07 12:48:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:48:23.823232124 +0000 UTC m=+1241.899018962" watchObservedRunningTime="2025-10-07 12:48:23.825334996 +0000 UTC m=+1241.901121854" Oct 07 12:48:23 crc kubenswrapper[5024]: I1007 12:48:23.882458 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 07 12:48:24 crc kubenswrapper[5024]: I1007 12:48:24.803859 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb","Type":"ContainerStarted","Data":"fe8f22d9ce1f63ba5f9eaf91b9a60daf6d4cc6145ffc9829a14a29136d254e72"} Oct 07 12:48:24 crc kubenswrapper[5024]: I1007 12:48:24.804415 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 07 12:48:26 crc kubenswrapper[5024]: I1007 12:48:26.230508 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 07 12:48:27 crc kubenswrapper[5024]: I1007 12:48:27.092249 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 07 12:48:27 crc kubenswrapper[5024]: I1007 12:48:27.092294 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 07 12:48:28 crc kubenswrapper[5024]: I1007 12:48:28.100461 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d0a23fee-3395-44fa-9ce5-71a1530a2910" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.179:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 07 12:48:28 crc kubenswrapper[5024]: I1007 12:48:28.108330 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d0a23fee-3395-44fa-9ce5-71a1530a2910" containerName="nova-metadata-metadata" probeResult="failure" output="Get 
\"https://10.217.0.179:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 07 12:48:31 crc kubenswrapper[5024]: I1007 12:48:31.230877 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 07 12:48:31 crc kubenswrapper[5024]: I1007 12:48:31.257290 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 07 12:48:31 crc kubenswrapper[5024]: I1007 12:48:31.279013 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=8.879315456 podStartE2EDuration="13.278990215s" podCreationTimestamp="2025-10-07 12:48:18 +0000 UTC" firstStartedPulling="2025-10-07 12:48:19.643422529 +0000 UTC m=+1237.719209367" lastFinishedPulling="2025-10-07 12:48:24.043097288 +0000 UTC m=+1242.118884126" observedRunningTime="2025-10-07 12:48:24.824859714 +0000 UTC m=+1242.900646562" watchObservedRunningTime="2025-10-07 12:48:31.278990215 +0000 UTC m=+1249.354777053" Oct 07 12:48:31 crc kubenswrapper[5024]: I1007 12:48:31.883029 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 07 12:48:32 crc kubenswrapper[5024]: I1007 12:48:32.339593 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 07 12:48:32 crc kubenswrapper[5024]: I1007 12:48:32.339916 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 07 12:48:33 crc kubenswrapper[5024]: I1007 12:48:33.421361 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aff6cacb-2047-4492-8122-735dc9d4f310" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.182:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 07 12:48:33 crc kubenswrapper[5024]: I1007 12:48:33.421372 5024 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aff6cacb-2047-4492-8122-735dc9d4f310" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.182:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 07 12:48:37 crc kubenswrapper[5024]: I1007 12:48:37.097967 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 07 12:48:37 crc kubenswrapper[5024]: I1007 12:48:37.101791 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 07 12:48:37 crc kubenswrapper[5024]: I1007 12:48:37.103620 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 07 12:48:37 crc kubenswrapper[5024]: I1007 12:48:37.922471 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 07 12:48:38 crc kubenswrapper[5024]: I1007 12:48:38.806081 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:38 crc kubenswrapper[5024]: I1007 12:48:38.919321 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/618f045f-fd3a-43df-a6fb-94db233769df-config-data\") pod \"618f045f-fd3a-43df-a6fb-94db233769df\" (UID: \"618f045f-fd3a-43df-a6fb-94db233769df\") " Oct 07 12:48:38 crc kubenswrapper[5024]: I1007 12:48:38.920076 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fptv8\" (UniqueName: \"kubernetes.io/projected/618f045f-fd3a-43df-a6fb-94db233769df-kube-api-access-fptv8\") pod \"618f045f-fd3a-43df-a6fb-94db233769df\" (UID: \"618f045f-fd3a-43df-a6fb-94db233769df\") " Oct 07 12:48:38 crc kubenswrapper[5024]: I1007 12:48:38.920130 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/618f045f-fd3a-43df-a6fb-94db233769df-combined-ca-bundle\") pod \"618f045f-fd3a-43df-a6fb-94db233769df\" (UID: \"618f045f-fd3a-43df-a6fb-94db233769df\") " Oct 07 12:48:38 crc kubenswrapper[5024]: I1007 12:48:38.929625 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/618f045f-fd3a-43df-a6fb-94db233769df-kube-api-access-fptv8" (OuterVolumeSpecName: "kube-api-access-fptv8") pod "618f045f-fd3a-43df-a6fb-94db233769df" (UID: "618f045f-fd3a-43df-a6fb-94db233769df"). InnerVolumeSpecName "kube-api-access-fptv8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:48:38 crc kubenswrapper[5024]: I1007 12:48:38.934987 5024 generic.go:334] "Generic (PLEG): container finished" podID="618f045f-fd3a-43df-a6fb-94db233769df" containerID="63b911d81b80ec741bde40682ff5b06a66288320489dca4e4cbfd92547d1ed40" exitCode=137 Oct 07 12:48:38 crc kubenswrapper[5024]: I1007 12:48:38.935196 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"618f045f-fd3a-43df-a6fb-94db233769df","Type":"ContainerDied","Data":"63b911d81b80ec741bde40682ff5b06a66288320489dca4e4cbfd92547d1ed40"} Oct 07 12:48:38 crc kubenswrapper[5024]: I1007 12:48:38.935239 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"618f045f-fd3a-43df-a6fb-94db233769df","Type":"ContainerDied","Data":"2fc66c1807d1309544f1a8faf7e687532bc1729eb5dcbea9f293413de91bc52d"} Oct 07 12:48:38 crc kubenswrapper[5024]: I1007 12:48:38.935329 5024 scope.go:117] "RemoveContainer" containerID="63b911d81b80ec741bde40682ff5b06a66288320489dca4e4cbfd92547d1ed40" Oct 07 12:48:38 crc kubenswrapper[5024]: I1007 12:48:38.935685 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:38 crc kubenswrapper[5024]: I1007 12:48:38.950819 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/618f045f-fd3a-43df-a6fb-94db233769df-config-data" (OuterVolumeSpecName: "config-data") pod "618f045f-fd3a-43df-a6fb-94db233769df" (UID: "618f045f-fd3a-43df-a6fb-94db233769df"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:38 crc kubenswrapper[5024]: I1007 12:48:38.951550 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/618f045f-fd3a-43df-a6fb-94db233769df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "618f045f-fd3a-43df-a6fb-94db233769df" (UID: "618f045f-fd3a-43df-a6fb-94db233769df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 12:48:39.022928 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/618f045f-fd3a-43df-a6fb-94db233769df-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 12:48:39.022967 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fptv8\" (UniqueName: \"kubernetes.io/projected/618f045f-fd3a-43df-a6fb-94db233769df-kube-api-access-fptv8\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 12:48:39.022982 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/618f045f-fd3a-43df-a6fb-94db233769df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 12:48:39.023677 5024 scope.go:117] "RemoveContainer" containerID="63b911d81b80ec741bde40682ff5b06a66288320489dca4e4cbfd92547d1ed40" Oct 07 12:48:39 crc kubenswrapper[5024]: E1007 12:48:39.024439 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63b911d81b80ec741bde40682ff5b06a66288320489dca4e4cbfd92547d1ed40\": container with ID starting with 63b911d81b80ec741bde40682ff5b06a66288320489dca4e4cbfd92547d1ed40 not found: ID does not exist" containerID="63b911d81b80ec741bde40682ff5b06a66288320489dca4e4cbfd92547d1ed40" Oct 07 12:48:39 crc kubenswrapper[5024]: 
I1007 12:48:39.024493 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63b911d81b80ec741bde40682ff5b06a66288320489dca4e4cbfd92547d1ed40"} err="failed to get container status \"63b911d81b80ec741bde40682ff5b06a66288320489dca4e4cbfd92547d1ed40\": rpc error: code = NotFound desc = could not find container \"63b911d81b80ec741bde40682ff5b06a66288320489dca4e4cbfd92547d1ed40\": container with ID starting with 63b911d81b80ec741bde40682ff5b06a66288320489dca4e4cbfd92547d1ed40 not found: ID does not exist" Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 12:48:39.267090 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 12:48:39.274098 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 12:48:39.290346 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 07 12:48:39 crc kubenswrapper[5024]: E1007 12:48:39.290850 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="618f045f-fd3a-43df-a6fb-94db233769df" containerName="nova-cell1-novncproxy-novncproxy" Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 12:48:39.290874 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="618f045f-fd3a-43df-a6fb-94db233769df" containerName="nova-cell1-novncproxy-novncproxy" Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 12:48:39.291080 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="618f045f-fd3a-43df-a6fb-94db233769df" containerName="nova-cell1-novncproxy-novncproxy" Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 12:48:39.292589 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 12:48:39.294545 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 12:48:39.294865 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 12:48:39.295041 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 12:48:39.300820 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 12:48:39.429098 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/04e53d1a-544a-462f-b10a-792855684c25-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"04e53d1a-544a-462f-b10a-792855684c25\") " pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 12:48:39.429169 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04e53d1a-544a-462f-b10a-792855684c25-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"04e53d1a-544a-462f-b10a-792855684c25\") " pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 12:48:39.429195 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/04e53d1a-544a-462f-b10a-792855684c25-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"04e53d1a-544a-462f-b10a-792855684c25\") " pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:39 crc 
kubenswrapper[5024]: I1007 12:48:39.429317 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04e53d1a-544a-462f-b10a-792855684c25-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"04e53d1a-544a-462f-b10a-792855684c25\") " pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 12:48:39.429449 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzscm\" (UniqueName: \"kubernetes.io/projected/04e53d1a-544a-462f-b10a-792855684c25-kube-api-access-kzscm\") pod \"nova-cell1-novncproxy-0\" (UID: \"04e53d1a-544a-462f-b10a-792855684c25\") " pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 12:48:39.530819 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/04e53d1a-544a-462f-b10a-792855684c25-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"04e53d1a-544a-462f-b10a-792855684c25\") " pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 12:48:39.530875 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04e53d1a-544a-462f-b10a-792855684c25-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"04e53d1a-544a-462f-b10a-792855684c25\") " pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 12:48:39.530897 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/04e53d1a-544a-462f-b10a-792855684c25-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"04e53d1a-544a-462f-b10a-792855684c25\") " pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 
12:48:39.530921 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04e53d1a-544a-462f-b10a-792855684c25-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"04e53d1a-544a-462f-b10a-792855684c25\") " pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 12:48:39.530945 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzscm\" (UniqueName: \"kubernetes.io/projected/04e53d1a-544a-462f-b10a-792855684c25-kube-api-access-kzscm\") pod \"nova-cell1-novncproxy-0\" (UID: \"04e53d1a-544a-462f-b10a-792855684c25\") " pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 12:48:39.535331 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04e53d1a-544a-462f-b10a-792855684c25-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"04e53d1a-544a-462f-b10a-792855684c25\") " pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 12:48:39.536108 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/04e53d1a-544a-462f-b10a-792855684c25-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"04e53d1a-544a-462f-b10a-792855684c25\") " pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 12:48:39.539207 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/04e53d1a-544a-462f-b10a-792855684c25-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"04e53d1a-544a-462f-b10a-792855684c25\") " pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 12:48:39.545764 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04e53d1a-544a-462f-b10a-792855684c25-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"04e53d1a-544a-462f-b10a-792855684c25\") " pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 12:48:39.548257 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzscm\" (UniqueName: \"kubernetes.io/projected/04e53d1a-544a-462f-b10a-792855684c25-kube-api-access-kzscm\") pod \"nova-cell1-novncproxy-0\" (UID: \"04e53d1a-544a-462f-b10a-792855684c25\") " pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:39 crc kubenswrapper[5024]: I1007 12:48:39.618542 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:40 crc kubenswrapper[5024]: I1007 12:48:40.045378 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 07 12:48:40 crc kubenswrapper[5024]: W1007 12:48:40.051433 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04e53d1a_544a_462f_b10a_792855684c25.slice/crio-3a196269371a8b264ca88f88743f37da4abb0c60ff73ef342adbf751d685de0d WatchSource:0}: Error finding container 3a196269371a8b264ca88f88743f37da4abb0c60ff73ef342adbf751d685de0d: Status 404 returned error can't find the container with id 3a196269371a8b264ca88f88743f37da4abb0c60ff73ef342adbf751d685de0d Oct 07 12:48:40 crc kubenswrapper[5024]: I1007 12:48:40.784579 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="618f045f-fd3a-43df-a6fb-94db233769df" path="/var/lib/kubelet/pods/618f045f-fd3a-43df-a6fb-94db233769df/volumes" Oct 07 12:48:40 crc kubenswrapper[5024]: I1007 12:48:40.963819 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"04e53d1a-544a-462f-b10a-792855684c25","Type":"ContainerStarted","Data":"5a729cff16280b8feb7dec890fd5413ed85454a14ba3b3fc215f9cf8244b6fee"} Oct 07 12:48:40 crc kubenswrapper[5024]: I1007 12:48:40.964202 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"04e53d1a-544a-462f-b10a-792855684c25","Type":"ContainerStarted","Data":"3a196269371a8b264ca88f88743f37da4abb0c60ff73ef342adbf751d685de0d"} Oct 07 12:48:40 crc kubenswrapper[5024]: I1007 12:48:40.988081 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.9880338640000002 podStartE2EDuration="1.988033864s" podCreationTimestamp="2025-10-07 12:48:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:48:40.98107583 +0000 UTC m=+1259.056862668" watchObservedRunningTime="2025-10-07 12:48:40.988033864 +0000 UTC m=+1259.063820702" Oct 07 12:48:42 crc kubenswrapper[5024]: I1007 12:48:42.344100 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 07 12:48:42 crc kubenswrapper[5024]: I1007 12:48:42.344261 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 07 12:48:42 crc kubenswrapper[5024]: I1007 12:48:42.344803 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 07 12:48:42 crc kubenswrapper[5024]: I1007 12:48:42.344851 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 07 12:48:42 crc kubenswrapper[5024]: I1007 12:48:42.349665 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 07 12:48:42 crc kubenswrapper[5024]: I1007 12:48:42.350621 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-api-0" Oct 07 12:48:42 crc kubenswrapper[5024]: I1007 12:48:42.545860 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-vb5v7"] Oct 07 12:48:42 crc kubenswrapper[5024]: I1007 12:48:42.548940 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-vb5v7" Oct 07 12:48:42 crc kubenswrapper[5024]: I1007 12:48:42.573291 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-vb5v7"] Oct 07 12:48:42 crc kubenswrapper[5024]: I1007 12:48:42.687055 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6052e097-7c9e-4277-a920-72d8923dc001-config\") pod \"dnsmasq-dns-5b856c5697-vb5v7\" (UID: \"6052e097-7c9e-4277-a920-72d8923dc001\") " pod="openstack/dnsmasq-dns-5b856c5697-vb5v7" Oct 07 12:48:42 crc kubenswrapper[5024]: I1007 12:48:42.687121 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6052e097-7c9e-4277-a920-72d8923dc001-dns-svc\") pod \"dnsmasq-dns-5b856c5697-vb5v7\" (UID: \"6052e097-7c9e-4277-a920-72d8923dc001\") " pod="openstack/dnsmasq-dns-5b856c5697-vb5v7" Oct 07 12:48:42 crc kubenswrapper[5024]: I1007 12:48:42.687169 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd98f\" (UniqueName: \"kubernetes.io/projected/6052e097-7c9e-4277-a920-72d8923dc001-kube-api-access-qd98f\") pod \"dnsmasq-dns-5b856c5697-vb5v7\" (UID: \"6052e097-7c9e-4277-a920-72d8923dc001\") " pod="openstack/dnsmasq-dns-5b856c5697-vb5v7" Oct 07 12:48:42 crc kubenswrapper[5024]: I1007 12:48:42.687194 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/6052e097-7c9e-4277-a920-72d8923dc001-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-vb5v7\" (UID: \"6052e097-7c9e-4277-a920-72d8923dc001\") " pod="openstack/dnsmasq-dns-5b856c5697-vb5v7" Oct 07 12:48:42 crc kubenswrapper[5024]: I1007 12:48:42.687250 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6052e097-7c9e-4277-a920-72d8923dc001-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-vb5v7\" (UID: \"6052e097-7c9e-4277-a920-72d8923dc001\") " pod="openstack/dnsmasq-dns-5b856c5697-vb5v7" Oct 07 12:48:42 crc kubenswrapper[5024]: I1007 12:48:42.789465 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6052e097-7c9e-4277-a920-72d8923dc001-config\") pod \"dnsmasq-dns-5b856c5697-vb5v7\" (UID: \"6052e097-7c9e-4277-a920-72d8923dc001\") " pod="openstack/dnsmasq-dns-5b856c5697-vb5v7" Oct 07 12:48:42 crc kubenswrapper[5024]: I1007 12:48:42.789557 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6052e097-7c9e-4277-a920-72d8923dc001-dns-svc\") pod \"dnsmasq-dns-5b856c5697-vb5v7\" (UID: \"6052e097-7c9e-4277-a920-72d8923dc001\") " pod="openstack/dnsmasq-dns-5b856c5697-vb5v7" Oct 07 12:48:42 crc kubenswrapper[5024]: I1007 12:48:42.789597 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd98f\" (UniqueName: \"kubernetes.io/projected/6052e097-7c9e-4277-a920-72d8923dc001-kube-api-access-qd98f\") pod \"dnsmasq-dns-5b856c5697-vb5v7\" (UID: \"6052e097-7c9e-4277-a920-72d8923dc001\") " pod="openstack/dnsmasq-dns-5b856c5697-vb5v7" Oct 07 12:48:42 crc kubenswrapper[5024]: I1007 12:48:42.789623 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/6052e097-7c9e-4277-a920-72d8923dc001-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-vb5v7\" (UID: \"6052e097-7c9e-4277-a920-72d8923dc001\") " pod="openstack/dnsmasq-dns-5b856c5697-vb5v7" Oct 07 12:48:42 crc kubenswrapper[5024]: I1007 12:48:42.789652 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6052e097-7c9e-4277-a920-72d8923dc001-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-vb5v7\" (UID: \"6052e097-7c9e-4277-a920-72d8923dc001\") " pod="openstack/dnsmasq-dns-5b856c5697-vb5v7" Oct 07 12:48:42 crc kubenswrapper[5024]: I1007 12:48:42.790525 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6052e097-7c9e-4277-a920-72d8923dc001-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-vb5v7\" (UID: \"6052e097-7c9e-4277-a920-72d8923dc001\") " pod="openstack/dnsmasq-dns-5b856c5697-vb5v7" Oct 07 12:48:42 crc kubenswrapper[5024]: I1007 12:48:42.790532 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6052e097-7c9e-4277-a920-72d8923dc001-dns-svc\") pod \"dnsmasq-dns-5b856c5697-vb5v7\" (UID: \"6052e097-7c9e-4277-a920-72d8923dc001\") " pod="openstack/dnsmasq-dns-5b856c5697-vb5v7" Oct 07 12:48:42 crc kubenswrapper[5024]: I1007 12:48:42.790845 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6052e097-7c9e-4277-a920-72d8923dc001-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-vb5v7\" (UID: \"6052e097-7c9e-4277-a920-72d8923dc001\") " pod="openstack/dnsmasq-dns-5b856c5697-vb5v7" Oct 07 12:48:42 crc kubenswrapper[5024]: I1007 12:48:42.791292 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6052e097-7c9e-4277-a920-72d8923dc001-config\") pod 
\"dnsmasq-dns-5b856c5697-vb5v7\" (UID: \"6052e097-7c9e-4277-a920-72d8923dc001\") " pod="openstack/dnsmasq-dns-5b856c5697-vb5v7" Oct 07 12:48:42 crc kubenswrapper[5024]: I1007 12:48:42.807790 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd98f\" (UniqueName: \"kubernetes.io/projected/6052e097-7c9e-4277-a920-72d8923dc001-kube-api-access-qd98f\") pod \"dnsmasq-dns-5b856c5697-vb5v7\" (UID: \"6052e097-7c9e-4277-a920-72d8923dc001\") " pod="openstack/dnsmasq-dns-5b856c5697-vb5v7" Oct 07 12:48:42 crc kubenswrapper[5024]: I1007 12:48:42.893875 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-vb5v7" Oct 07 12:48:43 crc kubenswrapper[5024]: I1007 12:48:43.356817 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-vb5v7"] Oct 07 12:48:43 crc kubenswrapper[5024]: W1007 12:48:43.359707 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6052e097_7c9e_4277_a920_72d8923dc001.slice/crio-cdfc4f7f589e2c821dbe446c40f36218121c22f183ad688c2f1882ef50ed5183 WatchSource:0}: Error finding container cdfc4f7f589e2c821dbe446c40f36218121c22f183ad688c2f1882ef50ed5183: Status 404 returned error can't find the container with id cdfc4f7f589e2c821dbe446c40f36218121c22f183ad688c2f1882ef50ed5183 Oct 07 12:48:43 crc kubenswrapper[5024]: I1007 12:48:43.997642 5024 generic.go:334] "Generic (PLEG): container finished" podID="6052e097-7c9e-4277-a920-72d8923dc001" containerID="2a55f44791aeb6318214eed5416723e94fe3ed798bac3887fc83f52740535c9c" exitCode=0 Oct 07 12:48:43 crc kubenswrapper[5024]: I1007 12:48:43.997753 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-vb5v7" event={"ID":"6052e097-7c9e-4277-a920-72d8923dc001","Type":"ContainerDied","Data":"2a55f44791aeb6318214eed5416723e94fe3ed798bac3887fc83f52740535c9c"} Oct 07 12:48:43 crc 
kubenswrapper[5024]: I1007 12:48:43.998118 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-vb5v7" event={"ID":"6052e097-7c9e-4277-a920-72d8923dc001","Type":"ContainerStarted","Data":"cdfc4f7f589e2c821dbe446c40f36218121c22f183ad688c2f1882ef50ed5183"} Oct 07 12:48:44 crc kubenswrapper[5024]: I1007 12:48:44.619179 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:44 crc kubenswrapper[5024]: I1007 12:48:44.652425 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:48:44 crc kubenswrapper[5024]: I1007 12:48:44.652705 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" containerName="ceilometer-central-agent" containerID="cri-o://c00a225764752c583e63bd66c5f170779a75e6d515b8ad2855cd9a7e9d7ef9db" gracePeriod=30 Oct 07 12:48:44 crc kubenswrapper[5024]: I1007 12:48:44.652810 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" containerName="proxy-httpd" containerID="cri-o://fe8f22d9ce1f63ba5f9eaf91b9a60daf6d4cc6145ffc9829a14a29136d254e72" gracePeriod=30 Oct 07 12:48:44 crc kubenswrapper[5024]: I1007 12:48:44.652874 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" containerName="sg-core" containerID="cri-o://3a02db07b64e0ed74e0ade3688902809617b1bd71d67085a3b0dbccc21feba2b" gracePeriod=30 Oct 07 12:48:44 crc kubenswrapper[5024]: I1007 12:48:44.652946 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" containerName="ceilometer-notification-agent" containerID="cri-o://a9ffa56ea1a9c2057d6080f2f16562c53f651347c0ac92a638cb7211bc84765b" 
gracePeriod=30 Oct 07 12:48:44 crc kubenswrapper[5024]: I1007 12:48:44.659198 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.180:3000/\": EOF" Oct 07 12:48:45 crc kubenswrapper[5024]: I1007 12:48:45.010441 5024 generic.go:334] "Generic (PLEG): container finished" podID="07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" containerID="fe8f22d9ce1f63ba5f9eaf91b9a60daf6d4cc6145ffc9829a14a29136d254e72" exitCode=0 Oct 07 12:48:45 crc kubenswrapper[5024]: I1007 12:48:45.011217 5024 generic.go:334] "Generic (PLEG): container finished" podID="07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" containerID="3a02db07b64e0ed74e0ade3688902809617b1bd71d67085a3b0dbccc21feba2b" exitCode=2 Oct 07 12:48:45 crc kubenswrapper[5024]: I1007 12:48:45.010596 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb","Type":"ContainerDied","Data":"fe8f22d9ce1f63ba5f9eaf91b9a60daf6d4cc6145ffc9829a14a29136d254e72"} Oct 07 12:48:45 crc kubenswrapper[5024]: I1007 12:48:45.011468 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb","Type":"ContainerDied","Data":"3a02db07b64e0ed74e0ade3688902809617b1bd71d67085a3b0dbccc21feba2b"} Oct 07 12:48:45 crc kubenswrapper[5024]: I1007 12:48:45.013452 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-vb5v7" event={"ID":"6052e097-7c9e-4277-a920-72d8923dc001","Type":"ContainerStarted","Data":"59edada1083546ba59e2f4f7892edd7cbef340c0566cf7f667f6255d691f6f77"} Oct 07 12:48:45 crc kubenswrapper[5024]: I1007 12:48:45.014419 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b856c5697-vb5v7" Oct 07 12:48:45 crc kubenswrapper[5024]: I1007 12:48:45.038693 5024 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b856c5697-vb5v7" podStartSLOduration=3.038671728 podStartE2EDuration="3.038671728s" podCreationTimestamp="2025-10-07 12:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:48:45.033580579 +0000 UTC m=+1263.109367417" watchObservedRunningTime="2025-10-07 12:48:45.038671728 +0000 UTC m=+1263.114458566" Oct 07 12:48:45 crc kubenswrapper[5024]: I1007 12:48:45.181639 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 07 12:48:45 crc kubenswrapper[5024]: I1007 12:48:45.181875 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="aff6cacb-2047-4492-8122-735dc9d4f310" containerName="nova-api-log" containerID="cri-o://f087f99015daf4645cf7c3a556fdf6dc7f5978fb5195e0746406bb925e5f1336" gracePeriod=30 Oct 07 12:48:45 crc kubenswrapper[5024]: I1007 12:48:45.182062 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="aff6cacb-2047-4492-8122-735dc9d4f310" containerName="nova-api-api" containerID="cri-o://24b950ee07ad651ebeccb2e4689077ff044fd58dfdefb9ba4c0b533f25cbd576" gracePeriod=30 Oct 07 12:48:46 crc kubenswrapper[5024]: I1007 12:48:46.023753 5024 generic.go:334] "Generic (PLEG): container finished" podID="07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" containerID="c00a225764752c583e63bd66c5f170779a75e6d515b8ad2855cd9a7e9d7ef9db" exitCode=0 Oct 07 12:48:46 crc kubenswrapper[5024]: I1007 12:48:46.023804 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb","Type":"ContainerDied","Data":"c00a225764752c583e63bd66c5f170779a75e6d515b8ad2855cd9a7e9d7ef9db"} Oct 07 12:48:46 crc kubenswrapper[5024]: I1007 12:48:46.026533 5024 generic.go:334] "Generic (PLEG): container 
finished" podID="aff6cacb-2047-4492-8122-735dc9d4f310" containerID="f087f99015daf4645cf7c3a556fdf6dc7f5978fb5195e0746406bb925e5f1336" exitCode=143 Oct 07 12:48:46 crc kubenswrapper[5024]: I1007 12:48:46.026612 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aff6cacb-2047-4492-8122-735dc9d4f310","Type":"ContainerDied","Data":"f087f99015daf4645cf7c3a556fdf6dc7f5978fb5195e0746406bb925e5f1336"} Oct 07 12:48:47 crc kubenswrapper[5024]: I1007 12:48:47.711123 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 07 12:48:47 crc kubenswrapper[5024]: I1007 12:48:47.785961 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-scripts\") pod \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " Oct 07 12:48:47 crc kubenswrapper[5024]: I1007 12:48:47.786167 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l29lg\" (UniqueName: \"kubernetes.io/projected/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-kube-api-access-l29lg\") pod \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " Oct 07 12:48:47 crc kubenswrapper[5024]: I1007 12:48:47.786196 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-ceilometer-tls-certs\") pod \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " Oct 07 12:48:47 crc kubenswrapper[5024]: I1007 12:48:47.786318 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-combined-ca-bundle\") pod \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\" (UID: 
\"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " Oct 07 12:48:47 crc kubenswrapper[5024]: I1007 12:48:47.786364 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-log-httpd\") pod \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " Oct 07 12:48:47 crc kubenswrapper[5024]: I1007 12:48:47.786552 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-run-httpd\") pod \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " Oct 07 12:48:47 crc kubenswrapper[5024]: I1007 12:48:47.786899 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" (UID: "07e79f2d-9c4e-4eb4-991f-1bb0de27afbb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:48:47 crc kubenswrapper[5024]: I1007 12:48:47.786954 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" (UID: "07e79f2d-9c4e-4eb4-991f-1bb0de27afbb"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:48:47 crc kubenswrapper[5024]: I1007 12:48:47.786997 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-sg-core-conf-yaml\") pod \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " Oct 07 12:48:47 crc kubenswrapper[5024]: I1007 12:48:47.787053 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-config-data\") pod \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\" (UID: \"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb\") " Oct 07 12:48:47 crc kubenswrapper[5024]: I1007 12:48:47.788912 5024 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:47 crc kubenswrapper[5024]: I1007 12:48:47.788933 5024 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:47 crc kubenswrapper[5024]: I1007 12:48:47.795492 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-kube-api-access-l29lg" (OuterVolumeSpecName: "kube-api-access-l29lg") pod "07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" (UID: "07e79f2d-9c4e-4eb4-991f-1bb0de27afbb"). InnerVolumeSpecName "kube-api-access-l29lg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:48:47 crc kubenswrapper[5024]: I1007 12:48:47.795611 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-scripts" (OuterVolumeSpecName: "scripts") pod "07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" (UID: "07e79f2d-9c4e-4eb4-991f-1bb0de27afbb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:47 crc kubenswrapper[5024]: I1007 12:48:47.814565 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" (UID: "07e79f2d-9c4e-4eb4-991f-1bb0de27afbb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:47 crc kubenswrapper[5024]: I1007 12:48:47.842446 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" (UID: "07e79f2d-9c4e-4eb4-991f-1bb0de27afbb"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:47 crc kubenswrapper[5024]: I1007 12:48:47.857802 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" (UID: "07e79f2d-9c4e-4eb4-991f-1bb0de27afbb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:47 crc kubenswrapper[5024]: I1007 12:48:47.901975 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-config-data" (OuterVolumeSpecName: "config-data") pod "07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" (UID: "07e79f2d-9c4e-4eb4-991f-1bb0de27afbb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:47 crc kubenswrapper[5024]: I1007 12:48:47.911355 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:47 crc kubenswrapper[5024]: I1007 12:48:47.911397 5024 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:47 crc kubenswrapper[5024]: I1007 12:48:47.911407 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:47 crc kubenswrapper[5024]: I1007 12:48:47.911418 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-scripts\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:47 crc kubenswrapper[5024]: I1007 12:48:47.911428 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l29lg\" (UniqueName: \"kubernetes.io/projected/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-kube-api-access-l29lg\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:47 crc kubenswrapper[5024]: I1007 12:48:47.911442 5024 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.047988 5024 generic.go:334] "Generic (PLEG): container finished" podID="07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" containerID="a9ffa56ea1a9c2057d6080f2f16562c53f651347c0ac92a638cb7211bc84765b" exitCode=0 Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.048027 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb","Type":"ContainerDied","Data":"a9ffa56ea1a9c2057d6080f2f16562c53f651347c0ac92a638cb7211bc84765b"} Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.048072 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.048102 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07e79f2d-9c4e-4eb4-991f-1bb0de27afbb","Type":"ContainerDied","Data":"9bac922b4d06770661ba916635475da4530ee7c8dc448cebd63d78e0b3e2d71a"} Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.048168 5024 scope.go:117] "RemoveContainer" containerID="fe8f22d9ce1f63ba5f9eaf91b9a60daf6d4cc6145ffc9829a14a29136d254e72" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.074434 5024 scope.go:117] "RemoveContainer" containerID="3a02db07b64e0ed74e0ade3688902809617b1bd71d67085a3b0dbccc21feba2b" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.098238 5024 scope.go:117] "RemoveContainer" containerID="a9ffa56ea1a9c2057d6080f2f16562c53f651347c0ac92a638cb7211bc84765b" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.099203 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.115287 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:48:48 crc 
kubenswrapper[5024]: I1007 12:48:48.130431 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:48:48 crc kubenswrapper[5024]: E1007 12:48:48.130796 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" containerName="proxy-httpd" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.130812 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" containerName="proxy-httpd" Oct 07 12:48:48 crc kubenswrapper[5024]: E1007 12:48:48.130831 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" containerName="ceilometer-central-agent" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.130837 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" containerName="ceilometer-central-agent" Oct 07 12:48:48 crc kubenswrapper[5024]: E1007 12:48:48.130847 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" containerName="sg-core" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.130854 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" containerName="sg-core" Oct 07 12:48:48 crc kubenswrapper[5024]: E1007 12:48:48.130879 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" containerName="ceilometer-notification-agent" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.130884 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" containerName="ceilometer-notification-agent" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.131052 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" containerName="proxy-httpd" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.131064 
5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" containerName="ceilometer-central-agent" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.131081 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" containerName="sg-core" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.131095 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" containerName="ceilometer-notification-agent" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.132663 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.135277 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.135587 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.135889 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.139173 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.139989 5024 scope.go:117] "RemoveContainer" containerID="c00a225764752c583e63bd66c5f170779a75e6d515b8ad2855cd9a7e9d7ef9db" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.170514 5024 scope.go:117] "RemoveContainer" containerID="fe8f22d9ce1f63ba5f9eaf91b9a60daf6d4cc6145ffc9829a14a29136d254e72" Oct 07 12:48:48 crc kubenswrapper[5024]: E1007 12:48:48.171016 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe8f22d9ce1f63ba5f9eaf91b9a60daf6d4cc6145ffc9829a14a29136d254e72\": 
container with ID starting with fe8f22d9ce1f63ba5f9eaf91b9a60daf6d4cc6145ffc9829a14a29136d254e72 not found: ID does not exist" containerID="fe8f22d9ce1f63ba5f9eaf91b9a60daf6d4cc6145ffc9829a14a29136d254e72" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.171049 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe8f22d9ce1f63ba5f9eaf91b9a60daf6d4cc6145ffc9829a14a29136d254e72"} err="failed to get container status \"fe8f22d9ce1f63ba5f9eaf91b9a60daf6d4cc6145ffc9829a14a29136d254e72\": rpc error: code = NotFound desc = could not find container \"fe8f22d9ce1f63ba5f9eaf91b9a60daf6d4cc6145ffc9829a14a29136d254e72\": container with ID starting with fe8f22d9ce1f63ba5f9eaf91b9a60daf6d4cc6145ffc9829a14a29136d254e72 not found: ID does not exist" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.171070 5024 scope.go:117] "RemoveContainer" containerID="3a02db07b64e0ed74e0ade3688902809617b1bd71d67085a3b0dbccc21feba2b" Oct 07 12:48:48 crc kubenswrapper[5024]: E1007 12:48:48.171822 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a02db07b64e0ed74e0ade3688902809617b1bd71d67085a3b0dbccc21feba2b\": container with ID starting with 3a02db07b64e0ed74e0ade3688902809617b1bd71d67085a3b0dbccc21feba2b not found: ID does not exist" containerID="3a02db07b64e0ed74e0ade3688902809617b1bd71d67085a3b0dbccc21feba2b" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.171878 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a02db07b64e0ed74e0ade3688902809617b1bd71d67085a3b0dbccc21feba2b"} err="failed to get container status \"3a02db07b64e0ed74e0ade3688902809617b1bd71d67085a3b0dbccc21feba2b\": rpc error: code = NotFound desc = could not find container \"3a02db07b64e0ed74e0ade3688902809617b1bd71d67085a3b0dbccc21feba2b\": container with ID starting with 
3a02db07b64e0ed74e0ade3688902809617b1bd71d67085a3b0dbccc21feba2b not found: ID does not exist" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.171897 5024 scope.go:117] "RemoveContainer" containerID="a9ffa56ea1a9c2057d6080f2f16562c53f651347c0ac92a638cb7211bc84765b" Oct 07 12:48:48 crc kubenswrapper[5024]: E1007 12:48:48.172313 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9ffa56ea1a9c2057d6080f2f16562c53f651347c0ac92a638cb7211bc84765b\": container with ID starting with a9ffa56ea1a9c2057d6080f2f16562c53f651347c0ac92a638cb7211bc84765b not found: ID does not exist" containerID="a9ffa56ea1a9c2057d6080f2f16562c53f651347c0ac92a638cb7211bc84765b" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.172335 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9ffa56ea1a9c2057d6080f2f16562c53f651347c0ac92a638cb7211bc84765b"} err="failed to get container status \"a9ffa56ea1a9c2057d6080f2f16562c53f651347c0ac92a638cb7211bc84765b\": rpc error: code = NotFound desc = could not find container \"a9ffa56ea1a9c2057d6080f2f16562c53f651347c0ac92a638cb7211bc84765b\": container with ID starting with a9ffa56ea1a9c2057d6080f2f16562c53f651347c0ac92a638cb7211bc84765b not found: ID does not exist" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.172348 5024 scope.go:117] "RemoveContainer" containerID="c00a225764752c583e63bd66c5f170779a75e6d515b8ad2855cd9a7e9d7ef9db" Oct 07 12:48:48 crc kubenswrapper[5024]: E1007 12:48:48.172648 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c00a225764752c583e63bd66c5f170779a75e6d515b8ad2855cd9a7e9d7ef9db\": container with ID starting with c00a225764752c583e63bd66c5f170779a75e6d515b8ad2855cd9a7e9d7ef9db not found: ID does not exist" containerID="c00a225764752c583e63bd66c5f170779a75e6d515b8ad2855cd9a7e9d7ef9db" Oct 07 12:48:48 crc 
kubenswrapper[5024]: I1007 12:48:48.172670 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c00a225764752c583e63bd66c5f170779a75e6d515b8ad2855cd9a7e9d7ef9db"} err="failed to get container status \"c00a225764752c583e63bd66c5f170779a75e6d515b8ad2855cd9a7e9d7ef9db\": rpc error: code = NotFound desc = could not find container \"c00a225764752c583e63bd66c5f170779a75e6d515b8ad2855cd9a7e9d7ef9db\": container with ID starting with c00a225764752c583e63bd66c5f170779a75e6d515b8ad2855cd9a7e9d7ef9db not found: ID does not exist" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.216664 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aad19176-43cb-4ebc-8605-8d619f32e96b-log-httpd\") pod \"ceilometer-0\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " pod="openstack/ceilometer-0" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.216780 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aad19176-43cb-4ebc-8605-8d619f32e96b-run-httpd\") pod \"ceilometer-0\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " pod="openstack/ceilometer-0" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.216822 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " pod="openstack/ceilometer-0" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.217015 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx8cc\" (UniqueName: \"kubernetes.io/projected/aad19176-43cb-4ebc-8605-8d619f32e96b-kube-api-access-lx8cc\") pod 
\"ceilometer-0\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " pod="openstack/ceilometer-0" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.217065 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " pod="openstack/ceilometer-0" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.217279 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-scripts\") pod \"ceilometer-0\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " pod="openstack/ceilometer-0" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.217359 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " pod="openstack/ceilometer-0" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.217482 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-config-data\") pod \"ceilometer-0\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " pod="openstack/ceilometer-0" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.318979 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aad19176-43cb-4ebc-8605-8d619f32e96b-run-httpd\") pod \"ceilometer-0\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " pod="openstack/ceilometer-0" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.319192 5024 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " pod="openstack/ceilometer-0" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.319379 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aad19176-43cb-4ebc-8605-8d619f32e96b-run-httpd\") pod \"ceilometer-0\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " pod="openstack/ceilometer-0" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.319949 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx8cc\" (UniqueName: \"kubernetes.io/projected/aad19176-43cb-4ebc-8605-8d619f32e96b-kube-api-access-lx8cc\") pod \"ceilometer-0\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " pod="openstack/ceilometer-0" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.320319 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " pod="openstack/ceilometer-0" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.320408 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-scripts\") pod \"ceilometer-0\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " pod="openstack/ceilometer-0" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.320445 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"aad19176-43cb-4ebc-8605-8d619f32e96b\") " pod="openstack/ceilometer-0" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.320483 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-config-data\") pod \"ceilometer-0\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " pod="openstack/ceilometer-0" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.320521 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aad19176-43cb-4ebc-8605-8d619f32e96b-log-httpd\") pod \"ceilometer-0\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " pod="openstack/ceilometer-0" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.320901 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aad19176-43cb-4ebc-8605-8d619f32e96b-log-httpd\") pod \"ceilometer-0\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " pod="openstack/ceilometer-0" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.323272 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " pod="openstack/ceilometer-0" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.323771 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " pod="openstack/ceilometer-0" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.323872 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-scripts\") pod \"ceilometer-0\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " pod="openstack/ceilometer-0" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.325160 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-config-data\") pod \"ceilometer-0\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " pod="openstack/ceilometer-0" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.326485 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " pod="openstack/ceilometer-0" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.337586 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx8cc\" (UniqueName: \"kubernetes.io/projected/aad19176-43cb-4ebc-8605-8d619f32e96b-kube-api-access-lx8cc\") pod \"ceilometer-0\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " pod="openstack/ceilometer-0" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.458662 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.763003 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07e79f2d-9c4e-4eb4-991f-1bb0de27afbb" path="/var/lib/kubelet/pods/07e79f2d-9c4e-4eb4-991f-1bb0de27afbb/volumes" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.793810 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.933197 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aff6cacb-2047-4492-8122-735dc9d4f310-config-data\") pod \"aff6cacb-2047-4492-8122-735dc9d4f310\" (UID: \"aff6cacb-2047-4492-8122-735dc9d4f310\") " Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.933412 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff6cacb-2047-4492-8122-735dc9d4f310-combined-ca-bundle\") pod \"aff6cacb-2047-4492-8122-735dc9d4f310\" (UID: \"aff6cacb-2047-4492-8122-735dc9d4f310\") " Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.933432 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7h2d\" (UniqueName: \"kubernetes.io/projected/aff6cacb-2047-4492-8122-735dc9d4f310-kube-api-access-h7h2d\") pod \"aff6cacb-2047-4492-8122-735dc9d4f310\" (UID: \"aff6cacb-2047-4492-8122-735dc9d4f310\") " Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.933469 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aff6cacb-2047-4492-8122-735dc9d4f310-logs\") pod \"aff6cacb-2047-4492-8122-735dc9d4f310\" (UID: \"aff6cacb-2047-4492-8122-735dc9d4f310\") " Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.934449 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aff6cacb-2047-4492-8122-735dc9d4f310-logs" (OuterVolumeSpecName: "logs") pod "aff6cacb-2047-4492-8122-735dc9d4f310" (UID: "aff6cacb-2047-4492-8122-735dc9d4f310"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.941521 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aff6cacb-2047-4492-8122-735dc9d4f310-kube-api-access-h7h2d" (OuterVolumeSpecName: "kube-api-access-h7h2d") pod "aff6cacb-2047-4492-8122-735dc9d4f310" (UID: "aff6cacb-2047-4492-8122-735dc9d4f310"). InnerVolumeSpecName "kube-api-access-h7h2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.970228 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 07 12:48:48 crc kubenswrapper[5024]: W1007 12:48:48.976201 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaad19176_43cb_4ebc_8605_8d619f32e96b.slice/crio-9cca98d575d7067633a859b94857083830cb49208aeb031c6047776f89f24b95 WatchSource:0}: Error finding container 9cca98d575d7067633a859b94857083830cb49208aeb031c6047776f89f24b95: Status 404 returned error can't find the container with id 9cca98d575d7067633a859b94857083830cb49208aeb031c6047776f89f24b95 Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.977924 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff6cacb-2047-4492-8122-735dc9d4f310-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aff6cacb-2047-4492-8122-735dc9d4f310" (UID: "aff6cacb-2047-4492-8122-735dc9d4f310"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:48 crc kubenswrapper[5024]: I1007 12:48:48.984754 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff6cacb-2047-4492-8122-735dc9d4f310-config-data" (OuterVolumeSpecName: "config-data") pod "aff6cacb-2047-4492-8122-735dc9d4f310" (UID: "aff6cacb-2047-4492-8122-735dc9d4f310"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.036089 5024 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aff6cacb-2047-4492-8122-735dc9d4f310-logs\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.036129 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aff6cacb-2047-4492-8122-735dc9d4f310-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.036159 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff6cacb-2047-4492-8122-735dc9d4f310-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.036176 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7h2d\" (UniqueName: \"kubernetes.io/projected/aff6cacb-2047-4492-8122-735dc9d4f310-kube-api-access-h7h2d\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.059963 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aad19176-43cb-4ebc-8605-8d619f32e96b","Type":"ContainerStarted","Data":"9cca98d575d7067633a859b94857083830cb49208aeb031c6047776f89f24b95"} Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.065312 5024 generic.go:334] "Generic (PLEG): container finished" podID="aff6cacb-2047-4492-8122-735dc9d4f310" containerID="24b950ee07ad651ebeccb2e4689077ff044fd58dfdefb9ba4c0b533f25cbd576" exitCode=0 Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.065463 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aff6cacb-2047-4492-8122-735dc9d4f310","Type":"ContainerDied","Data":"24b950ee07ad651ebeccb2e4689077ff044fd58dfdefb9ba4c0b533f25cbd576"} 
Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.065692 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aff6cacb-2047-4492-8122-735dc9d4f310","Type":"ContainerDied","Data":"68ccb8fbeb7550d00cfe7a71a89c15124eeba55be8da44eccb1ef9b90bb114ef"} Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.065714 5024 scope.go:117] "RemoveContainer" containerID="24b950ee07ad651ebeccb2e4689077ff044fd58dfdefb9ba4c0b533f25cbd576" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.065537 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.101628 5024 scope.go:117] "RemoveContainer" containerID="f087f99015daf4645cf7c3a556fdf6dc7f5978fb5195e0746406bb925e5f1336" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.103412 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.131922 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.136833 5024 scope.go:117] "RemoveContainer" containerID="24b950ee07ad651ebeccb2e4689077ff044fd58dfdefb9ba4c0b533f25cbd576" Oct 07 12:48:49 crc kubenswrapper[5024]: E1007 12:48:49.138274 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24b950ee07ad651ebeccb2e4689077ff044fd58dfdefb9ba4c0b533f25cbd576\": container with ID starting with 24b950ee07ad651ebeccb2e4689077ff044fd58dfdefb9ba4c0b533f25cbd576 not found: ID does not exist" containerID="24b950ee07ad651ebeccb2e4689077ff044fd58dfdefb9ba4c0b533f25cbd576" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.138327 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24b950ee07ad651ebeccb2e4689077ff044fd58dfdefb9ba4c0b533f25cbd576"} 
err="failed to get container status \"24b950ee07ad651ebeccb2e4689077ff044fd58dfdefb9ba4c0b533f25cbd576\": rpc error: code = NotFound desc = could not find container \"24b950ee07ad651ebeccb2e4689077ff044fd58dfdefb9ba4c0b533f25cbd576\": container with ID starting with 24b950ee07ad651ebeccb2e4689077ff044fd58dfdefb9ba4c0b533f25cbd576 not found: ID does not exist" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.138363 5024 scope.go:117] "RemoveContainer" containerID="f087f99015daf4645cf7c3a556fdf6dc7f5978fb5195e0746406bb925e5f1336" Oct 07 12:48:49 crc kubenswrapper[5024]: E1007 12:48:49.138779 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f087f99015daf4645cf7c3a556fdf6dc7f5978fb5195e0746406bb925e5f1336\": container with ID starting with f087f99015daf4645cf7c3a556fdf6dc7f5978fb5195e0746406bb925e5f1336 not found: ID does not exist" containerID="f087f99015daf4645cf7c3a556fdf6dc7f5978fb5195e0746406bb925e5f1336" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.138834 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f087f99015daf4645cf7c3a556fdf6dc7f5978fb5195e0746406bb925e5f1336"} err="failed to get container status \"f087f99015daf4645cf7c3a556fdf6dc7f5978fb5195e0746406bb925e5f1336\": rpc error: code = NotFound desc = could not find container \"f087f99015daf4645cf7c3a556fdf6dc7f5978fb5195e0746406bb925e5f1336\": container with ID starting with f087f99015daf4645cf7c3a556fdf6dc7f5978fb5195e0746406bb925e5f1336 not found: ID does not exist" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.139265 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 07 12:48:49 crc kubenswrapper[5024]: E1007 12:48:49.139766 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff6cacb-2047-4492-8122-735dc9d4f310" containerName="nova-api-log" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 
12:48:49.139788 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="aff6cacb-2047-4492-8122-735dc9d4f310" containerName="nova-api-log" Oct 07 12:48:49 crc kubenswrapper[5024]: E1007 12:48:49.139828 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff6cacb-2047-4492-8122-735dc9d4f310" containerName="nova-api-api" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.139836 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="aff6cacb-2047-4492-8122-735dc9d4f310" containerName="nova-api-api" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.140072 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="aff6cacb-2047-4492-8122-735dc9d4f310" containerName="nova-api-log" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.140099 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="aff6cacb-2047-4492-8122-735dc9d4f310" containerName="nova-api-api" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.141270 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.151311 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.151785 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.151845 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.160386 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.239425 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1133f085-feec-4ed5-9566-d7fe1f19f257-public-tls-certs\") pod \"nova-api-0\" (UID: \"1133f085-feec-4ed5-9566-d7fe1f19f257\") " pod="openstack/nova-api-0" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.239508 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s74gn\" (UniqueName: \"kubernetes.io/projected/1133f085-feec-4ed5-9566-d7fe1f19f257-kube-api-access-s74gn\") pod \"nova-api-0\" (UID: \"1133f085-feec-4ed5-9566-d7fe1f19f257\") " pod="openstack/nova-api-0" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.239547 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1133f085-feec-4ed5-9566-d7fe1f19f257-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1133f085-feec-4ed5-9566-d7fe1f19f257\") " pod="openstack/nova-api-0" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.239797 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/1133f085-feec-4ed5-9566-d7fe1f19f257-logs\") pod \"nova-api-0\" (UID: \"1133f085-feec-4ed5-9566-d7fe1f19f257\") " pod="openstack/nova-api-0" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.239835 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1133f085-feec-4ed5-9566-d7fe1f19f257-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1133f085-feec-4ed5-9566-d7fe1f19f257\") " pod="openstack/nova-api-0" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.239904 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1133f085-feec-4ed5-9566-d7fe1f19f257-config-data\") pod \"nova-api-0\" (UID: \"1133f085-feec-4ed5-9566-d7fe1f19f257\") " pod="openstack/nova-api-0" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.341727 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1133f085-feec-4ed5-9566-d7fe1f19f257-logs\") pod \"nova-api-0\" (UID: \"1133f085-feec-4ed5-9566-d7fe1f19f257\") " pod="openstack/nova-api-0" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.341763 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1133f085-feec-4ed5-9566-d7fe1f19f257-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1133f085-feec-4ed5-9566-d7fe1f19f257\") " pod="openstack/nova-api-0" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.341794 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1133f085-feec-4ed5-9566-d7fe1f19f257-config-data\") pod \"nova-api-0\" (UID: \"1133f085-feec-4ed5-9566-d7fe1f19f257\") " pod="openstack/nova-api-0" Oct 07 12:48:49 crc 
kubenswrapper[5024]: I1007 12:48:49.341843 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1133f085-feec-4ed5-9566-d7fe1f19f257-public-tls-certs\") pod \"nova-api-0\" (UID: \"1133f085-feec-4ed5-9566-d7fe1f19f257\") " pod="openstack/nova-api-0" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.341881 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s74gn\" (UniqueName: \"kubernetes.io/projected/1133f085-feec-4ed5-9566-d7fe1f19f257-kube-api-access-s74gn\") pod \"nova-api-0\" (UID: \"1133f085-feec-4ed5-9566-d7fe1f19f257\") " pod="openstack/nova-api-0" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.341908 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1133f085-feec-4ed5-9566-d7fe1f19f257-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1133f085-feec-4ed5-9566-d7fe1f19f257\") " pod="openstack/nova-api-0" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.342231 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1133f085-feec-4ed5-9566-d7fe1f19f257-logs\") pod \"nova-api-0\" (UID: \"1133f085-feec-4ed5-9566-d7fe1f19f257\") " pod="openstack/nova-api-0" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.346275 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1133f085-feec-4ed5-9566-d7fe1f19f257-public-tls-certs\") pod \"nova-api-0\" (UID: \"1133f085-feec-4ed5-9566-d7fe1f19f257\") " pod="openstack/nova-api-0" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.346366 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1133f085-feec-4ed5-9566-d7fe1f19f257-combined-ca-bundle\") pod 
\"nova-api-0\" (UID: \"1133f085-feec-4ed5-9566-d7fe1f19f257\") " pod="openstack/nova-api-0" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.346546 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1133f085-feec-4ed5-9566-d7fe1f19f257-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1133f085-feec-4ed5-9566-d7fe1f19f257\") " pod="openstack/nova-api-0" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.347098 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1133f085-feec-4ed5-9566-d7fe1f19f257-config-data\") pod \"nova-api-0\" (UID: \"1133f085-feec-4ed5-9566-d7fe1f19f257\") " pod="openstack/nova-api-0" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.358455 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s74gn\" (UniqueName: \"kubernetes.io/projected/1133f085-feec-4ed5-9566-d7fe1f19f257-kube-api-access-s74gn\") pod \"nova-api-0\" (UID: \"1133f085-feec-4ed5-9566-d7fe1f19f257\") " pod="openstack/nova-api-0" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.458221 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.619990 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.644494 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:49 crc kubenswrapper[5024]: I1007 12:48:49.940463 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 07 12:48:49 crc kubenswrapper[5024]: W1007 12:48:49.945806 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1133f085_feec_4ed5_9566_d7fe1f19f257.slice/crio-0f060103f4810555563e0a3946e16545bdad1da68122b7fecbc1017dfde7d98c WatchSource:0}: Error finding container 0f060103f4810555563e0a3946e16545bdad1da68122b7fecbc1017dfde7d98c: Status 404 returned error can't find the container with id 0f060103f4810555563e0a3946e16545bdad1da68122b7fecbc1017dfde7d98c Oct 07 12:48:50 crc kubenswrapper[5024]: I1007 12:48:50.080912 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1133f085-feec-4ed5-9566-d7fe1f19f257","Type":"ContainerStarted","Data":"0f060103f4810555563e0a3946e16545bdad1da68122b7fecbc1017dfde7d98c"} Oct 07 12:48:50 crc kubenswrapper[5024]: I1007 12:48:50.082538 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aad19176-43cb-4ebc-8605-8d619f32e96b","Type":"ContainerStarted","Data":"707516c3036a055bf960a5c4c70727477b998fd450cfa93a7c2f93395b9d0435"} Oct 07 12:48:50 crc kubenswrapper[5024]: I1007 12:48:50.098242 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Oct 07 12:48:50 crc kubenswrapper[5024]: I1007 12:48:50.249599 5024 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-cell-mapping-hdf5v"] Oct 07 12:48:50 crc kubenswrapper[5024]: I1007 12:48:50.250975 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hdf5v" Oct 07 12:48:50 crc kubenswrapper[5024]: I1007 12:48:50.252810 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Oct 07 12:48:50 crc kubenswrapper[5024]: I1007 12:48:50.253582 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Oct 07 12:48:50 crc kubenswrapper[5024]: I1007 12:48:50.256806 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-hdf5v"] Oct 07 12:48:50 crc kubenswrapper[5024]: I1007 12:48:50.363822 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4130a45e-dbcf-40d2-bfe9-b353bff57d17-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hdf5v\" (UID: \"4130a45e-dbcf-40d2-bfe9-b353bff57d17\") " pod="openstack/nova-cell1-cell-mapping-hdf5v" Oct 07 12:48:50 crc kubenswrapper[5024]: I1007 12:48:50.363888 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4130a45e-dbcf-40d2-bfe9-b353bff57d17-config-data\") pod \"nova-cell1-cell-mapping-hdf5v\" (UID: \"4130a45e-dbcf-40d2-bfe9-b353bff57d17\") " pod="openstack/nova-cell1-cell-mapping-hdf5v" Oct 07 12:48:50 crc kubenswrapper[5024]: I1007 12:48:50.363925 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl2h7\" (UniqueName: \"kubernetes.io/projected/4130a45e-dbcf-40d2-bfe9-b353bff57d17-kube-api-access-pl2h7\") pod \"nova-cell1-cell-mapping-hdf5v\" (UID: \"4130a45e-dbcf-40d2-bfe9-b353bff57d17\") " pod="openstack/nova-cell1-cell-mapping-hdf5v" Oct 07 12:48:50 
crc kubenswrapper[5024]: I1007 12:48:50.364285 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4130a45e-dbcf-40d2-bfe9-b353bff57d17-scripts\") pod \"nova-cell1-cell-mapping-hdf5v\" (UID: \"4130a45e-dbcf-40d2-bfe9-b353bff57d17\") " pod="openstack/nova-cell1-cell-mapping-hdf5v" Oct 07 12:48:50 crc kubenswrapper[5024]: I1007 12:48:50.472590 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4130a45e-dbcf-40d2-bfe9-b353bff57d17-scripts\") pod \"nova-cell1-cell-mapping-hdf5v\" (UID: \"4130a45e-dbcf-40d2-bfe9-b353bff57d17\") " pod="openstack/nova-cell1-cell-mapping-hdf5v" Oct 07 12:48:50 crc kubenswrapper[5024]: I1007 12:48:50.472792 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4130a45e-dbcf-40d2-bfe9-b353bff57d17-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hdf5v\" (UID: \"4130a45e-dbcf-40d2-bfe9-b353bff57d17\") " pod="openstack/nova-cell1-cell-mapping-hdf5v" Oct 07 12:48:50 crc kubenswrapper[5024]: I1007 12:48:50.472873 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4130a45e-dbcf-40d2-bfe9-b353bff57d17-config-data\") pod \"nova-cell1-cell-mapping-hdf5v\" (UID: \"4130a45e-dbcf-40d2-bfe9-b353bff57d17\") " pod="openstack/nova-cell1-cell-mapping-hdf5v" Oct 07 12:48:50 crc kubenswrapper[5024]: I1007 12:48:50.474312 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl2h7\" (UniqueName: \"kubernetes.io/projected/4130a45e-dbcf-40d2-bfe9-b353bff57d17-kube-api-access-pl2h7\") pod \"nova-cell1-cell-mapping-hdf5v\" (UID: \"4130a45e-dbcf-40d2-bfe9-b353bff57d17\") " pod="openstack/nova-cell1-cell-mapping-hdf5v" Oct 07 12:48:50 crc kubenswrapper[5024]: I1007 12:48:50.477917 
5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4130a45e-dbcf-40d2-bfe9-b353bff57d17-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hdf5v\" (UID: \"4130a45e-dbcf-40d2-bfe9-b353bff57d17\") " pod="openstack/nova-cell1-cell-mapping-hdf5v" Oct 07 12:48:50 crc kubenswrapper[5024]: I1007 12:48:50.479568 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4130a45e-dbcf-40d2-bfe9-b353bff57d17-config-data\") pod \"nova-cell1-cell-mapping-hdf5v\" (UID: \"4130a45e-dbcf-40d2-bfe9-b353bff57d17\") " pod="openstack/nova-cell1-cell-mapping-hdf5v" Oct 07 12:48:50 crc kubenswrapper[5024]: I1007 12:48:50.480209 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4130a45e-dbcf-40d2-bfe9-b353bff57d17-scripts\") pod \"nova-cell1-cell-mapping-hdf5v\" (UID: \"4130a45e-dbcf-40d2-bfe9-b353bff57d17\") " pod="openstack/nova-cell1-cell-mapping-hdf5v" Oct 07 12:48:50 crc kubenswrapper[5024]: I1007 12:48:50.495439 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl2h7\" (UniqueName: \"kubernetes.io/projected/4130a45e-dbcf-40d2-bfe9-b353bff57d17-kube-api-access-pl2h7\") pod \"nova-cell1-cell-mapping-hdf5v\" (UID: \"4130a45e-dbcf-40d2-bfe9-b353bff57d17\") " pod="openstack/nova-cell1-cell-mapping-hdf5v" Oct 07 12:48:50 crc kubenswrapper[5024]: I1007 12:48:50.601262 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hdf5v" Oct 07 12:48:50 crc kubenswrapper[5024]: I1007 12:48:50.767467 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aff6cacb-2047-4492-8122-735dc9d4f310" path="/var/lib/kubelet/pods/aff6cacb-2047-4492-8122-735dc9d4f310/volumes" Oct 07 12:48:51 crc kubenswrapper[5024]: I1007 12:48:51.033979 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-hdf5v"] Oct 07 12:48:51 crc kubenswrapper[5024]: W1007 12:48:51.042372 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4130a45e_dbcf_40d2_bfe9_b353bff57d17.slice/crio-1571a6bb89b42cbae9b7a7ef6a2aab40d6f5e7503a59ffe053f4ad91765fa9ab WatchSource:0}: Error finding container 1571a6bb89b42cbae9b7a7ef6a2aab40d6f5e7503a59ffe053f4ad91765fa9ab: Status 404 returned error can't find the container with id 1571a6bb89b42cbae9b7a7ef6a2aab40d6f5e7503a59ffe053f4ad91765fa9ab Oct 07 12:48:51 crc kubenswrapper[5024]: I1007 12:48:51.093393 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1133f085-feec-4ed5-9566-d7fe1f19f257","Type":"ContainerStarted","Data":"ff4583e27259584ff9e96e60390f500ccdbf1e799f48e2d4aae93d5bdafe2b33"} Oct 07 12:48:51 crc kubenswrapper[5024]: I1007 12:48:51.093456 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1133f085-feec-4ed5-9566-d7fe1f19f257","Type":"ContainerStarted","Data":"789a6f5a9f4ae7bc41de6500ede309b682ad4c0ec3e9bde0c9557c8911b03a9c"} Oct 07 12:48:51 crc kubenswrapper[5024]: I1007 12:48:51.096860 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aad19176-43cb-4ebc-8605-8d619f32e96b","Type":"ContainerStarted","Data":"e31fe92b5980eae90855f7b387d75664850fbe569754c45ced756cad9e481862"} Oct 07 12:48:51 crc kubenswrapper[5024]: I1007 12:48:51.098896 5024 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hdf5v" event={"ID":"4130a45e-dbcf-40d2-bfe9-b353bff57d17","Type":"ContainerStarted","Data":"1571a6bb89b42cbae9b7a7ef6a2aab40d6f5e7503a59ffe053f4ad91765fa9ab"} Oct 07 12:48:51 crc kubenswrapper[5024]: I1007 12:48:51.123802 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.1237838079999998 podStartE2EDuration="2.123783808s" podCreationTimestamp="2025-10-07 12:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:48:51.115555686 +0000 UTC m=+1269.191342524" watchObservedRunningTime="2025-10-07 12:48:51.123783808 +0000 UTC m=+1269.199570646" Oct 07 12:48:52 crc kubenswrapper[5024]: I1007 12:48:52.121752 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aad19176-43cb-4ebc-8605-8d619f32e96b","Type":"ContainerStarted","Data":"ad52b5b371f3cebd67a868f43b4a771168905dd6e8a93e2733037d3f3b66f522"} Oct 07 12:48:52 crc kubenswrapper[5024]: I1007 12:48:52.126169 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hdf5v" event={"ID":"4130a45e-dbcf-40d2-bfe9-b353bff57d17","Type":"ContainerStarted","Data":"9568d1c4a3bc350b753ed7d626d97eb2758ab018d13dab0fdf9a2667368e3153"} Oct 07 12:48:52 crc kubenswrapper[5024]: I1007 12:48:52.799481 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-hdf5v" podStartSLOduration=2.799450822 podStartE2EDuration="2.799450822s" podCreationTimestamp="2025-10-07 12:48:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:48:52.141982184 +0000 UTC m=+1270.217769012" watchObservedRunningTime="2025-10-07 12:48:52.799450822 +0000 UTC m=+1270.875237700" Oct 07 
12:48:52 crc kubenswrapper[5024]: I1007 12:48:52.895373 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b856c5697-vb5v7" Oct 07 12:48:52 crc kubenswrapper[5024]: I1007 12:48:52.953865 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-j5fsn"] Oct 07 12:48:52 crc kubenswrapper[5024]: I1007 12:48:52.955877 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-566b5b7845-j5fsn" podUID="f545749d-e342-4e17-85b9-23f17ace4fdf" containerName="dnsmasq-dns" containerID="cri-o://c7fcc188dd585e00581d121e25d565d7a7fecc6aaff4afe87043b0d1a887b04a" gracePeriod=10 Oct 07 12:48:53 crc kubenswrapper[5024]: I1007 12:48:53.145019 5024 generic.go:334] "Generic (PLEG): container finished" podID="f545749d-e342-4e17-85b9-23f17ace4fdf" containerID="c7fcc188dd585e00581d121e25d565d7a7fecc6aaff4afe87043b0d1a887b04a" exitCode=0 Oct 07 12:48:53 crc kubenswrapper[5024]: I1007 12:48:53.145327 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-j5fsn" event={"ID":"f545749d-e342-4e17-85b9-23f17ace4fdf","Type":"ContainerDied","Data":"c7fcc188dd585e00581d121e25d565d7a7fecc6aaff4afe87043b0d1a887b04a"} Oct 07 12:48:53 crc kubenswrapper[5024]: I1007 12:48:53.157917 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aad19176-43cb-4ebc-8605-8d619f32e96b","Type":"ContainerStarted","Data":"1d7d19c0a3eba91dd93ae7770e4ff472fc0407cfb5d5c8a699a36f5d909c92e5"} Oct 07 12:48:53 crc kubenswrapper[5024]: I1007 12:48:53.158062 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 07 12:48:53 crc kubenswrapper[5024]: I1007 12:48:53.192983 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.588505064 podStartE2EDuration="5.192962232s" podCreationTimestamp="2025-10-07 12:48:48 
+0000 UTC" firstStartedPulling="2025-10-07 12:48:48.979356075 +0000 UTC m=+1267.055142913" lastFinishedPulling="2025-10-07 12:48:52.583813243 +0000 UTC m=+1270.659600081" observedRunningTime="2025-10-07 12:48:53.187968276 +0000 UTC m=+1271.263755114" watchObservedRunningTime="2025-10-07 12:48:53.192962232 +0000 UTC m=+1271.268749070" Oct 07 12:48:53 crc kubenswrapper[5024]: I1007 12:48:53.413020 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-j5fsn" Oct 07 12:48:53 crc kubenswrapper[5024]: I1007 12:48:53.548886 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bf5r\" (UniqueName: \"kubernetes.io/projected/f545749d-e342-4e17-85b9-23f17ace4fdf-kube-api-access-6bf5r\") pod \"f545749d-e342-4e17-85b9-23f17ace4fdf\" (UID: \"f545749d-e342-4e17-85b9-23f17ace4fdf\") " Oct 07 12:48:53 crc kubenswrapper[5024]: I1007 12:48:53.548964 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f545749d-e342-4e17-85b9-23f17ace4fdf-dns-svc\") pod \"f545749d-e342-4e17-85b9-23f17ace4fdf\" (UID: \"f545749d-e342-4e17-85b9-23f17ace4fdf\") " Oct 07 12:48:53 crc kubenswrapper[5024]: I1007 12:48:53.549011 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f545749d-e342-4e17-85b9-23f17ace4fdf-ovsdbserver-nb\") pod \"f545749d-e342-4e17-85b9-23f17ace4fdf\" (UID: \"f545749d-e342-4e17-85b9-23f17ace4fdf\") " Oct 07 12:48:53 crc kubenswrapper[5024]: I1007 12:48:53.549736 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f545749d-e342-4e17-85b9-23f17ace4fdf-ovsdbserver-sb\") pod \"f545749d-e342-4e17-85b9-23f17ace4fdf\" (UID: \"f545749d-e342-4e17-85b9-23f17ace4fdf\") " Oct 07 12:48:53 crc kubenswrapper[5024]: I1007 
12:48:53.549819 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f545749d-e342-4e17-85b9-23f17ace4fdf-config\") pod \"f545749d-e342-4e17-85b9-23f17ace4fdf\" (UID: \"f545749d-e342-4e17-85b9-23f17ace4fdf\") " Oct 07 12:48:53 crc kubenswrapper[5024]: I1007 12:48:53.553449 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f545749d-e342-4e17-85b9-23f17ace4fdf-kube-api-access-6bf5r" (OuterVolumeSpecName: "kube-api-access-6bf5r") pod "f545749d-e342-4e17-85b9-23f17ace4fdf" (UID: "f545749d-e342-4e17-85b9-23f17ace4fdf"). InnerVolumeSpecName "kube-api-access-6bf5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:48:53 crc kubenswrapper[5024]: I1007 12:48:53.595626 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f545749d-e342-4e17-85b9-23f17ace4fdf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f545749d-e342-4e17-85b9-23f17ace4fdf" (UID: "f545749d-e342-4e17-85b9-23f17ace4fdf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:48:53 crc kubenswrapper[5024]: I1007 12:48:53.599801 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f545749d-e342-4e17-85b9-23f17ace4fdf-config" (OuterVolumeSpecName: "config") pod "f545749d-e342-4e17-85b9-23f17ace4fdf" (UID: "f545749d-e342-4e17-85b9-23f17ace4fdf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:48:53 crc kubenswrapper[5024]: I1007 12:48:53.609308 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f545749d-e342-4e17-85b9-23f17ace4fdf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f545749d-e342-4e17-85b9-23f17ace4fdf" (UID: "f545749d-e342-4e17-85b9-23f17ace4fdf"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:48:53 crc kubenswrapper[5024]: I1007 12:48:53.609749 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f545749d-e342-4e17-85b9-23f17ace4fdf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f545749d-e342-4e17-85b9-23f17ace4fdf" (UID: "f545749d-e342-4e17-85b9-23f17ace4fdf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:48:53 crc kubenswrapper[5024]: I1007 12:48:53.651475 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bf5r\" (UniqueName: \"kubernetes.io/projected/f545749d-e342-4e17-85b9-23f17ace4fdf-kube-api-access-6bf5r\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:53 crc kubenswrapper[5024]: I1007 12:48:53.651506 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f545749d-e342-4e17-85b9-23f17ace4fdf-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:53 crc kubenswrapper[5024]: I1007 12:48:53.651517 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f545749d-e342-4e17-85b9-23f17ace4fdf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:53 crc kubenswrapper[5024]: I1007 12:48:53.651526 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f545749d-e342-4e17-85b9-23f17ace4fdf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:53 crc kubenswrapper[5024]: I1007 12:48:53.651535 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f545749d-e342-4e17-85b9-23f17ace4fdf-config\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:54 crc kubenswrapper[5024]: I1007 12:48:54.161391 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-j5fsn" 
event={"ID":"f545749d-e342-4e17-85b9-23f17ace4fdf","Type":"ContainerDied","Data":"e7f23654d6dbcb6e9cf809624484692c3cb1b4aabdee93f8dcb9568fdbbb5471"} Oct 07 12:48:54 crc kubenswrapper[5024]: I1007 12:48:54.161423 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-j5fsn" Oct 07 12:48:54 crc kubenswrapper[5024]: I1007 12:48:54.161470 5024 scope.go:117] "RemoveContainer" containerID="c7fcc188dd585e00581d121e25d565d7a7fecc6aaff4afe87043b0d1a887b04a" Oct 07 12:48:54 crc kubenswrapper[5024]: I1007 12:48:54.194554 5024 scope.go:117] "RemoveContainer" containerID="591f7250acdd5d3b293f1b98aee24e91351c2a95e6199891995e0bdaa8f62b01" Oct 07 12:48:54 crc kubenswrapper[5024]: I1007 12:48:54.199785 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-j5fsn"] Oct 07 12:48:54 crc kubenswrapper[5024]: I1007 12:48:54.207022 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-j5fsn"] Oct 07 12:48:54 crc kubenswrapper[5024]: I1007 12:48:54.768550 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f545749d-e342-4e17-85b9-23f17ace4fdf" path="/var/lib/kubelet/pods/f545749d-e342-4e17-85b9-23f17ace4fdf/volumes" Oct 07 12:48:56 crc kubenswrapper[5024]: I1007 12:48:56.202314 5024 generic.go:334] "Generic (PLEG): container finished" podID="4130a45e-dbcf-40d2-bfe9-b353bff57d17" containerID="9568d1c4a3bc350b753ed7d626d97eb2758ab018d13dab0fdf9a2667368e3153" exitCode=0 Oct 07 12:48:56 crc kubenswrapper[5024]: I1007 12:48:56.202504 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hdf5v" event={"ID":"4130a45e-dbcf-40d2-bfe9-b353bff57d17","Type":"ContainerDied","Data":"9568d1c4a3bc350b753ed7d626d97eb2758ab018d13dab0fdf9a2667368e3153"} Oct 07 12:48:57 crc kubenswrapper[5024]: I1007 12:48:57.556600 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hdf5v" Oct 07 12:48:57 crc kubenswrapper[5024]: I1007 12:48:57.663332 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl2h7\" (UniqueName: \"kubernetes.io/projected/4130a45e-dbcf-40d2-bfe9-b353bff57d17-kube-api-access-pl2h7\") pod \"4130a45e-dbcf-40d2-bfe9-b353bff57d17\" (UID: \"4130a45e-dbcf-40d2-bfe9-b353bff57d17\") " Oct 07 12:48:57 crc kubenswrapper[5024]: I1007 12:48:57.663426 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4130a45e-dbcf-40d2-bfe9-b353bff57d17-config-data\") pod \"4130a45e-dbcf-40d2-bfe9-b353bff57d17\" (UID: \"4130a45e-dbcf-40d2-bfe9-b353bff57d17\") " Oct 07 12:48:57 crc kubenswrapper[5024]: I1007 12:48:57.663513 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4130a45e-dbcf-40d2-bfe9-b353bff57d17-combined-ca-bundle\") pod \"4130a45e-dbcf-40d2-bfe9-b353bff57d17\" (UID: \"4130a45e-dbcf-40d2-bfe9-b353bff57d17\") " Oct 07 12:48:57 crc kubenswrapper[5024]: I1007 12:48:57.663534 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4130a45e-dbcf-40d2-bfe9-b353bff57d17-scripts\") pod \"4130a45e-dbcf-40d2-bfe9-b353bff57d17\" (UID: \"4130a45e-dbcf-40d2-bfe9-b353bff57d17\") " Oct 07 12:48:57 crc kubenswrapper[5024]: I1007 12:48:57.668913 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4130a45e-dbcf-40d2-bfe9-b353bff57d17-kube-api-access-pl2h7" (OuterVolumeSpecName: "kube-api-access-pl2h7") pod "4130a45e-dbcf-40d2-bfe9-b353bff57d17" (UID: "4130a45e-dbcf-40d2-bfe9-b353bff57d17"). InnerVolumeSpecName "kube-api-access-pl2h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:48:57 crc kubenswrapper[5024]: I1007 12:48:57.669417 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4130a45e-dbcf-40d2-bfe9-b353bff57d17-scripts" (OuterVolumeSpecName: "scripts") pod "4130a45e-dbcf-40d2-bfe9-b353bff57d17" (UID: "4130a45e-dbcf-40d2-bfe9-b353bff57d17"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:57 crc kubenswrapper[5024]: I1007 12:48:57.693624 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4130a45e-dbcf-40d2-bfe9-b353bff57d17-config-data" (OuterVolumeSpecName: "config-data") pod "4130a45e-dbcf-40d2-bfe9-b353bff57d17" (UID: "4130a45e-dbcf-40d2-bfe9-b353bff57d17"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:57 crc kubenswrapper[5024]: I1007 12:48:57.697382 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4130a45e-dbcf-40d2-bfe9-b353bff57d17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4130a45e-dbcf-40d2-bfe9-b353bff57d17" (UID: "4130a45e-dbcf-40d2-bfe9-b353bff57d17"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:57 crc kubenswrapper[5024]: I1007 12:48:57.765663 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4130a45e-dbcf-40d2-bfe9-b353bff57d17-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:57 crc kubenswrapper[5024]: I1007 12:48:57.765705 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4130a45e-dbcf-40d2-bfe9-b353bff57d17-scripts\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:57 crc kubenswrapper[5024]: I1007 12:48:57.765718 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pl2h7\" (UniqueName: \"kubernetes.io/projected/4130a45e-dbcf-40d2-bfe9-b353bff57d17-kube-api-access-pl2h7\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:57 crc kubenswrapper[5024]: I1007 12:48:57.765733 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4130a45e-dbcf-40d2-bfe9-b353bff57d17-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:58 crc kubenswrapper[5024]: I1007 12:48:58.223887 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hdf5v" event={"ID":"4130a45e-dbcf-40d2-bfe9-b353bff57d17","Type":"ContainerDied","Data":"1571a6bb89b42cbae9b7a7ef6a2aab40d6f5e7503a59ffe053f4ad91765fa9ab"} Oct 07 12:48:58 crc kubenswrapper[5024]: I1007 12:48:58.224309 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1571a6bb89b42cbae9b7a7ef6a2aab40d6f5e7503a59ffe053f4ad91765fa9ab" Oct 07 12:48:58 crc kubenswrapper[5024]: I1007 12:48:58.223942 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hdf5v" Oct 07 12:48:58 crc kubenswrapper[5024]: I1007 12:48:58.418660 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 07 12:48:58 crc kubenswrapper[5024]: I1007 12:48:58.419060 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1133f085-feec-4ed5-9566-d7fe1f19f257" containerName="nova-api-log" containerID="cri-o://ff4583e27259584ff9e96e60390f500ccdbf1e799f48e2d4aae93d5bdafe2b33" gracePeriod=30 Oct 07 12:48:58 crc kubenswrapper[5024]: I1007 12:48:58.419101 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1133f085-feec-4ed5-9566-d7fe1f19f257" containerName="nova-api-api" containerID="cri-o://789a6f5a9f4ae7bc41de6500ede309b682ad4c0ec3e9bde0c9557c8911b03a9c" gracePeriod=30 Oct 07 12:48:58 crc kubenswrapper[5024]: I1007 12:48:58.442217 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 07 12:48:58 crc kubenswrapper[5024]: I1007 12:48:58.442546 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="6eec8c08-d884-41c0-b9da-af69d056c96a" containerName="nova-scheduler-scheduler" containerID="cri-o://02c9080c4f6e269679bed641e973d244a3dd0ee4d532f6d0a308638b7aaeeb99" gracePeriod=30 Oct 07 12:48:58 crc kubenswrapper[5024]: I1007 12:48:58.450868 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 07 12:48:58 crc kubenswrapper[5024]: I1007 12:48:58.451166 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d0a23fee-3395-44fa-9ce5-71a1530a2910" containerName="nova-metadata-log" containerID="cri-o://eca43c671e1b9b2e9f7cdeaaffc0c105ba429f871bedbdffa310f93e73d79822" gracePeriod=30 Oct 07 12:48:58 crc kubenswrapper[5024]: I1007 12:48:58.451348 5024 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d0a23fee-3395-44fa-9ce5-71a1530a2910" containerName="nova-metadata-metadata" containerID="cri-o://6be90aea6ac8a8bde144dace9ec4f3b15791e4829002170b796023f2d6e26d26" gracePeriod=30 Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.101482 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.190156 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s74gn\" (UniqueName: \"kubernetes.io/projected/1133f085-feec-4ed5-9566-d7fe1f19f257-kube-api-access-s74gn\") pod \"1133f085-feec-4ed5-9566-d7fe1f19f257\" (UID: \"1133f085-feec-4ed5-9566-d7fe1f19f257\") " Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.190239 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1133f085-feec-4ed5-9566-d7fe1f19f257-public-tls-certs\") pod \"1133f085-feec-4ed5-9566-d7fe1f19f257\" (UID: \"1133f085-feec-4ed5-9566-d7fe1f19f257\") " Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.190288 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1133f085-feec-4ed5-9566-d7fe1f19f257-logs\") pod \"1133f085-feec-4ed5-9566-d7fe1f19f257\" (UID: \"1133f085-feec-4ed5-9566-d7fe1f19f257\") " Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.190399 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1133f085-feec-4ed5-9566-d7fe1f19f257-combined-ca-bundle\") pod \"1133f085-feec-4ed5-9566-d7fe1f19f257\" (UID: \"1133f085-feec-4ed5-9566-d7fe1f19f257\") " Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.190417 5024 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1133f085-feec-4ed5-9566-d7fe1f19f257-config-data\") pod \"1133f085-feec-4ed5-9566-d7fe1f19f257\" (UID: \"1133f085-feec-4ed5-9566-d7fe1f19f257\") " Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.190457 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1133f085-feec-4ed5-9566-d7fe1f19f257-internal-tls-certs\") pod \"1133f085-feec-4ed5-9566-d7fe1f19f257\" (UID: \"1133f085-feec-4ed5-9566-d7fe1f19f257\") " Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.191370 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1133f085-feec-4ed5-9566-d7fe1f19f257-logs" (OuterVolumeSpecName: "logs") pod "1133f085-feec-4ed5-9566-d7fe1f19f257" (UID: "1133f085-feec-4ed5-9566-d7fe1f19f257"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.200656 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1133f085-feec-4ed5-9566-d7fe1f19f257-kube-api-access-s74gn" (OuterVolumeSpecName: "kube-api-access-s74gn") pod "1133f085-feec-4ed5-9566-d7fe1f19f257" (UID: "1133f085-feec-4ed5-9566-d7fe1f19f257"). InnerVolumeSpecName "kube-api-access-s74gn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.224214 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1133f085-feec-4ed5-9566-d7fe1f19f257-config-data" (OuterVolumeSpecName: "config-data") pod "1133f085-feec-4ed5-9566-d7fe1f19f257" (UID: "1133f085-feec-4ed5-9566-d7fe1f19f257"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.233322 5024 generic.go:334] "Generic (PLEG): container finished" podID="1133f085-feec-4ed5-9566-d7fe1f19f257" containerID="789a6f5a9f4ae7bc41de6500ede309b682ad4c0ec3e9bde0c9557c8911b03a9c" exitCode=0 Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.233360 5024 generic.go:334] "Generic (PLEG): container finished" podID="1133f085-feec-4ed5-9566-d7fe1f19f257" containerID="ff4583e27259584ff9e96e60390f500ccdbf1e799f48e2d4aae93d5bdafe2b33" exitCode=143 Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.233374 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.233387 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1133f085-feec-4ed5-9566-d7fe1f19f257","Type":"ContainerDied","Data":"789a6f5a9f4ae7bc41de6500ede309b682ad4c0ec3e9bde0c9557c8911b03a9c"} Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.233483 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1133f085-feec-4ed5-9566-d7fe1f19f257","Type":"ContainerDied","Data":"ff4583e27259584ff9e96e60390f500ccdbf1e799f48e2d4aae93d5bdafe2b33"} Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.233498 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1133f085-feec-4ed5-9566-d7fe1f19f257","Type":"ContainerDied","Data":"0f060103f4810555563e0a3946e16545bdad1da68122b7fecbc1017dfde7d98c"} Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.233519 5024 scope.go:117] "RemoveContainer" containerID="789a6f5a9f4ae7bc41de6500ede309b682ad4c0ec3e9bde0c9557c8911b03a9c" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.236627 5024 generic.go:334] "Generic (PLEG): container finished" podID="d0a23fee-3395-44fa-9ce5-71a1530a2910" 
containerID="eca43c671e1b9b2e9f7cdeaaffc0c105ba429f871bedbdffa310f93e73d79822" exitCode=143 Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.236691 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0a23fee-3395-44fa-9ce5-71a1530a2910","Type":"ContainerDied","Data":"eca43c671e1b9b2e9f7cdeaaffc0c105ba429f871bedbdffa310f93e73d79822"} Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.247436 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1133f085-feec-4ed5-9566-d7fe1f19f257-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1133f085-feec-4ed5-9566-d7fe1f19f257" (UID: "1133f085-feec-4ed5-9566-d7fe1f19f257"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.250952 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1133f085-feec-4ed5-9566-d7fe1f19f257-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1133f085-feec-4ed5-9566-d7fe1f19f257" (UID: "1133f085-feec-4ed5-9566-d7fe1f19f257"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.254615 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1133f085-feec-4ed5-9566-d7fe1f19f257-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1133f085-feec-4ed5-9566-d7fe1f19f257" (UID: "1133f085-feec-4ed5-9566-d7fe1f19f257"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.263728 5024 scope.go:117] "RemoveContainer" containerID="ff4583e27259584ff9e96e60390f500ccdbf1e799f48e2d4aae93d5bdafe2b33" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.292094 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1133f085-feec-4ed5-9566-d7fe1f19f257-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.292156 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1133f085-feec-4ed5-9566-d7fe1f19f257-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.292172 5024 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1133f085-feec-4ed5-9566-d7fe1f19f257-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.292184 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s74gn\" (UniqueName: \"kubernetes.io/projected/1133f085-feec-4ed5-9566-d7fe1f19f257-kube-api-access-s74gn\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.292197 5024 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1133f085-feec-4ed5-9566-d7fe1f19f257-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.292206 5024 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1133f085-feec-4ed5-9566-d7fe1f19f257-logs\") on node \"crc\" DevicePath \"\"" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.300021 5024 scope.go:117] "RemoveContainer" 
containerID="789a6f5a9f4ae7bc41de6500ede309b682ad4c0ec3e9bde0c9557c8911b03a9c" Oct 07 12:48:59 crc kubenswrapper[5024]: E1007 12:48:59.300733 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"789a6f5a9f4ae7bc41de6500ede309b682ad4c0ec3e9bde0c9557c8911b03a9c\": container with ID starting with 789a6f5a9f4ae7bc41de6500ede309b682ad4c0ec3e9bde0c9557c8911b03a9c not found: ID does not exist" containerID="789a6f5a9f4ae7bc41de6500ede309b682ad4c0ec3e9bde0c9557c8911b03a9c" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.300764 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"789a6f5a9f4ae7bc41de6500ede309b682ad4c0ec3e9bde0c9557c8911b03a9c"} err="failed to get container status \"789a6f5a9f4ae7bc41de6500ede309b682ad4c0ec3e9bde0c9557c8911b03a9c\": rpc error: code = NotFound desc = could not find container \"789a6f5a9f4ae7bc41de6500ede309b682ad4c0ec3e9bde0c9557c8911b03a9c\": container with ID starting with 789a6f5a9f4ae7bc41de6500ede309b682ad4c0ec3e9bde0c9557c8911b03a9c not found: ID does not exist" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.300785 5024 scope.go:117] "RemoveContainer" containerID="ff4583e27259584ff9e96e60390f500ccdbf1e799f48e2d4aae93d5bdafe2b33" Oct 07 12:48:59 crc kubenswrapper[5024]: E1007 12:48:59.301634 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff4583e27259584ff9e96e60390f500ccdbf1e799f48e2d4aae93d5bdafe2b33\": container with ID starting with ff4583e27259584ff9e96e60390f500ccdbf1e799f48e2d4aae93d5bdafe2b33 not found: ID does not exist" containerID="ff4583e27259584ff9e96e60390f500ccdbf1e799f48e2d4aae93d5bdafe2b33" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.301665 5024 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ff4583e27259584ff9e96e60390f500ccdbf1e799f48e2d4aae93d5bdafe2b33"} err="failed to get container status \"ff4583e27259584ff9e96e60390f500ccdbf1e799f48e2d4aae93d5bdafe2b33\": rpc error: code = NotFound desc = could not find container \"ff4583e27259584ff9e96e60390f500ccdbf1e799f48e2d4aae93d5bdafe2b33\": container with ID starting with ff4583e27259584ff9e96e60390f500ccdbf1e799f48e2d4aae93d5bdafe2b33 not found: ID does not exist" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.301683 5024 scope.go:117] "RemoveContainer" containerID="789a6f5a9f4ae7bc41de6500ede309b682ad4c0ec3e9bde0c9557c8911b03a9c" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.302275 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"789a6f5a9f4ae7bc41de6500ede309b682ad4c0ec3e9bde0c9557c8911b03a9c"} err="failed to get container status \"789a6f5a9f4ae7bc41de6500ede309b682ad4c0ec3e9bde0c9557c8911b03a9c\": rpc error: code = NotFound desc = could not find container \"789a6f5a9f4ae7bc41de6500ede309b682ad4c0ec3e9bde0c9557c8911b03a9c\": container with ID starting with 789a6f5a9f4ae7bc41de6500ede309b682ad4c0ec3e9bde0c9557c8911b03a9c not found: ID does not exist" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.302301 5024 scope.go:117] "RemoveContainer" containerID="ff4583e27259584ff9e96e60390f500ccdbf1e799f48e2d4aae93d5bdafe2b33" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.302818 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff4583e27259584ff9e96e60390f500ccdbf1e799f48e2d4aae93d5bdafe2b33"} err="failed to get container status \"ff4583e27259584ff9e96e60390f500ccdbf1e799f48e2d4aae93d5bdafe2b33\": rpc error: code = NotFound desc = could not find container \"ff4583e27259584ff9e96e60390f500ccdbf1e799f48e2d4aae93d5bdafe2b33\": container with ID starting with ff4583e27259584ff9e96e60390f500ccdbf1e799f48e2d4aae93d5bdafe2b33 not found: ID does not 
exist" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.635449 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.646113 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.658255 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 07 12:48:59 crc kubenswrapper[5024]: E1007 12:48:59.658803 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f545749d-e342-4e17-85b9-23f17ace4fdf" containerName="init" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.658833 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f545749d-e342-4e17-85b9-23f17ace4fdf" containerName="init" Oct 07 12:48:59 crc kubenswrapper[5024]: E1007 12:48:59.658871 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f545749d-e342-4e17-85b9-23f17ace4fdf" containerName="dnsmasq-dns" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.658884 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f545749d-e342-4e17-85b9-23f17ace4fdf" containerName="dnsmasq-dns" Oct 07 12:48:59 crc kubenswrapper[5024]: E1007 12:48:59.658911 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1133f085-feec-4ed5-9566-d7fe1f19f257" containerName="nova-api-api" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.658923 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="1133f085-feec-4ed5-9566-d7fe1f19f257" containerName="nova-api-api" Oct 07 12:48:59 crc kubenswrapper[5024]: E1007 12:48:59.658938 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4130a45e-dbcf-40d2-bfe9-b353bff57d17" containerName="nova-manage" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.658950 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="4130a45e-dbcf-40d2-bfe9-b353bff57d17" containerName="nova-manage" Oct 07 
12:48:59 crc kubenswrapper[5024]: E1007 12:48:59.658964 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1133f085-feec-4ed5-9566-d7fe1f19f257" containerName="nova-api-log" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.658975 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="1133f085-feec-4ed5-9566-d7fe1f19f257" containerName="nova-api-log" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.659317 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="1133f085-feec-4ed5-9566-d7fe1f19f257" containerName="nova-api-log" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.659362 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="1133f085-feec-4ed5-9566-d7fe1f19f257" containerName="nova-api-api" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.659384 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="f545749d-e342-4e17-85b9-23f17ace4fdf" containerName="dnsmasq-dns" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.659398 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="4130a45e-dbcf-40d2-bfe9-b353bff57d17" containerName="nova-manage" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.662315 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.664678 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.665026 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.665191 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.668200 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.804958 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f63c03fb-b295-4ba6-b385-842aebd5147f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f63c03fb-b295-4ba6-b385-842aebd5147f\") " pod="openstack/nova-api-0" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.805066 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f63c03fb-b295-4ba6-b385-842aebd5147f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f63c03fb-b295-4ba6-b385-842aebd5147f\") " pod="openstack/nova-api-0" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.805109 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f63c03fb-b295-4ba6-b385-842aebd5147f-logs\") pod \"nova-api-0\" (UID: \"f63c03fb-b295-4ba6-b385-842aebd5147f\") " pod="openstack/nova-api-0" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.805172 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f63c03fb-b295-4ba6-b385-842aebd5147f-config-data\") pod \"nova-api-0\" (UID: \"f63c03fb-b295-4ba6-b385-842aebd5147f\") " pod="openstack/nova-api-0" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.805213 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwxst\" (UniqueName: \"kubernetes.io/projected/f63c03fb-b295-4ba6-b385-842aebd5147f-kube-api-access-rwxst\") pod \"nova-api-0\" (UID: \"f63c03fb-b295-4ba6-b385-842aebd5147f\") " pod="openstack/nova-api-0" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.805436 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f63c03fb-b295-4ba6-b385-842aebd5147f-public-tls-certs\") pod \"nova-api-0\" (UID: \"f63c03fb-b295-4ba6-b385-842aebd5147f\") " pod="openstack/nova-api-0" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.908459 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f63c03fb-b295-4ba6-b385-842aebd5147f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f63c03fb-b295-4ba6-b385-842aebd5147f\") " pod="openstack/nova-api-0" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.908587 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f63c03fb-b295-4ba6-b385-842aebd5147f-logs\") pod \"nova-api-0\" (UID: \"f63c03fb-b295-4ba6-b385-842aebd5147f\") " pod="openstack/nova-api-0" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.908684 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f63c03fb-b295-4ba6-b385-842aebd5147f-config-data\") pod \"nova-api-0\" (UID: \"f63c03fb-b295-4ba6-b385-842aebd5147f\") " pod="openstack/nova-api-0" Oct 07 12:48:59 crc 
kubenswrapper[5024]: I1007 12:48:59.908834 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwxst\" (UniqueName: \"kubernetes.io/projected/f63c03fb-b295-4ba6-b385-842aebd5147f-kube-api-access-rwxst\") pod \"nova-api-0\" (UID: \"f63c03fb-b295-4ba6-b385-842aebd5147f\") " pod="openstack/nova-api-0" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.908935 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f63c03fb-b295-4ba6-b385-842aebd5147f-public-tls-certs\") pod \"nova-api-0\" (UID: \"f63c03fb-b295-4ba6-b385-842aebd5147f\") " pod="openstack/nova-api-0" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.909038 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f63c03fb-b295-4ba6-b385-842aebd5147f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f63c03fb-b295-4ba6-b385-842aebd5147f\") " pod="openstack/nova-api-0" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.910742 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f63c03fb-b295-4ba6-b385-842aebd5147f-logs\") pod \"nova-api-0\" (UID: \"f63c03fb-b295-4ba6-b385-842aebd5147f\") " pod="openstack/nova-api-0" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.912549 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f63c03fb-b295-4ba6-b385-842aebd5147f-public-tls-certs\") pod \"nova-api-0\" (UID: \"f63c03fb-b295-4ba6-b385-842aebd5147f\") " pod="openstack/nova-api-0" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.913404 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f63c03fb-b295-4ba6-b385-842aebd5147f-config-data\") pod \"nova-api-0\" (UID: 
\"f63c03fb-b295-4ba6-b385-842aebd5147f\") " pod="openstack/nova-api-0" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.922549 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f63c03fb-b295-4ba6-b385-842aebd5147f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f63c03fb-b295-4ba6-b385-842aebd5147f\") " pod="openstack/nova-api-0" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.927354 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f63c03fb-b295-4ba6-b385-842aebd5147f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f63c03fb-b295-4ba6-b385-842aebd5147f\") " pod="openstack/nova-api-0" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.932229 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwxst\" (UniqueName: \"kubernetes.io/projected/f63c03fb-b295-4ba6-b385-842aebd5147f-kube-api-access-rwxst\") pod \"nova-api-0\" (UID: \"f63c03fb-b295-4ba6-b385-842aebd5147f\") " pod="openstack/nova-api-0" Oct 07 12:48:59 crc kubenswrapper[5024]: I1007 12:48:59.986170 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 07 12:49:00 crc kubenswrapper[5024]: I1007 12:49:00.247015 5024 generic.go:334] "Generic (PLEG): container finished" podID="6eec8c08-d884-41c0-b9da-af69d056c96a" containerID="02c9080c4f6e269679bed641e973d244a3dd0ee4d532f6d0a308638b7aaeeb99" exitCode=0 Oct 07 12:49:00 crc kubenswrapper[5024]: I1007 12:49:00.247308 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6eec8c08-d884-41c0-b9da-af69d056c96a","Type":"ContainerDied","Data":"02c9080c4f6e269679bed641e973d244a3dd0ee4d532f6d0a308638b7aaeeb99"} Oct 07 12:49:00 crc kubenswrapper[5024]: I1007 12:49:00.395871 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 07 12:49:00 crc kubenswrapper[5024]: I1007 12:49:00.458653 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 07 12:49:00 crc kubenswrapper[5024]: I1007 12:49:00.533090 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eec8c08-d884-41c0-b9da-af69d056c96a-config-data\") pod \"6eec8c08-d884-41c0-b9da-af69d056c96a\" (UID: \"6eec8c08-d884-41c0-b9da-af69d056c96a\") " Oct 07 12:49:00 crc kubenswrapper[5024]: I1007 12:49:00.533179 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwlsr\" (UniqueName: \"kubernetes.io/projected/6eec8c08-d884-41c0-b9da-af69d056c96a-kube-api-access-fwlsr\") pod \"6eec8c08-d884-41c0-b9da-af69d056c96a\" (UID: \"6eec8c08-d884-41c0-b9da-af69d056c96a\") " Oct 07 12:49:00 crc kubenswrapper[5024]: I1007 12:49:00.533215 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eec8c08-d884-41c0-b9da-af69d056c96a-combined-ca-bundle\") pod \"6eec8c08-d884-41c0-b9da-af69d056c96a\" (UID: \"6eec8c08-d884-41c0-b9da-af69d056c96a\") " Oct 07 12:49:00 crc kubenswrapper[5024]: I1007 12:49:00.539892 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eec8c08-d884-41c0-b9da-af69d056c96a-kube-api-access-fwlsr" (OuterVolumeSpecName: "kube-api-access-fwlsr") pod "6eec8c08-d884-41c0-b9da-af69d056c96a" (UID: "6eec8c08-d884-41c0-b9da-af69d056c96a"). InnerVolumeSpecName "kube-api-access-fwlsr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:49:00 crc kubenswrapper[5024]: E1007 12:49:00.565929 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6eec8c08-d884-41c0-b9da-af69d056c96a-combined-ca-bundle podName:6eec8c08-d884-41c0-b9da-af69d056c96a nodeName:}" failed. No retries permitted until 2025-10-07 12:49:01.065896755 +0000 UTC m=+1279.141683593 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/6eec8c08-d884-41c0-b9da-af69d056c96a-combined-ca-bundle") pod "6eec8c08-d884-41c0-b9da-af69d056c96a" (UID: "6eec8c08-d884-41c0-b9da-af69d056c96a") : error deleting /var/lib/kubelet/pods/6eec8c08-d884-41c0-b9da-af69d056c96a/volume-subpaths: remove /var/lib/kubelet/pods/6eec8c08-d884-41c0-b9da-af69d056c96a/volume-subpaths: no such file or directory Oct 07 12:49:00 crc kubenswrapper[5024]: I1007 12:49:00.571363 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eec8c08-d884-41c0-b9da-af69d056c96a-config-data" (OuterVolumeSpecName: "config-data") pod "6eec8c08-d884-41c0-b9da-af69d056c96a" (UID: "6eec8c08-d884-41c0-b9da-af69d056c96a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:49:00 crc kubenswrapper[5024]: I1007 12:49:00.635034 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eec8c08-d884-41c0-b9da-af69d056c96a-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:49:00 crc kubenswrapper[5024]: I1007 12:49:00.635068 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwlsr\" (UniqueName: \"kubernetes.io/projected/6eec8c08-d884-41c0-b9da-af69d056c96a-kube-api-access-fwlsr\") on node \"crc\" DevicePath \"\"" Oct 07 12:49:00 crc kubenswrapper[5024]: I1007 12:49:00.771856 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1133f085-feec-4ed5-9566-d7fe1f19f257" path="/var/lib/kubelet/pods/1133f085-feec-4ed5-9566-d7fe1f19f257/volumes" Oct 07 12:49:01 crc kubenswrapper[5024]: I1007 12:49:01.144127 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eec8c08-d884-41c0-b9da-af69d056c96a-combined-ca-bundle\") pod \"6eec8c08-d884-41c0-b9da-af69d056c96a\" (UID: \"6eec8c08-d884-41c0-b9da-af69d056c96a\") " Oct 07 12:49:01 crc kubenswrapper[5024]: I1007 12:49:01.147399 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eec8c08-d884-41c0-b9da-af69d056c96a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6eec8c08-d884-41c0-b9da-af69d056c96a" (UID: "6eec8c08-d884-41c0-b9da-af69d056c96a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:49:01 crc kubenswrapper[5024]: I1007 12:49:01.246278 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eec8c08-d884-41c0-b9da-af69d056c96a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:49:01 crc kubenswrapper[5024]: I1007 12:49:01.259855 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f63c03fb-b295-4ba6-b385-842aebd5147f","Type":"ContainerStarted","Data":"626fb23df8b19d64329080ef626559daf7850da0a5419655a296e5cd67a719c9"} Oct 07 12:49:01 crc kubenswrapper[5024]: I1007 12:49:01.259911 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f63c03fb-b295-4ba6-b385-842aebd5147f","Type":"ContainerStarted","Data":"a4a525c7f47323f1bf42a2fec2d78c1ee5f391cf522f5c09d2df325d488d028f"} Oct 07 12:49:01 crc kubenswrapper[5024]: I1007 12:49:01.259922 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f63c03fb-b295-4ba6-b385-842aebd5147f","Type":"ContainerStarted","Data":"784ccae38d8ddcf0e7c2f48283773a5f6726ba650f543c9a2cd9822ee5dc69bf"} Oct 07 12:49:01 crc kubenswrapper[5024]: I1007 12:49:01.263887 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6eec8c08-d884-41c0-b9da-af69d056c96a","Type":"ContainerDied","Data":"ff0bfed6059a7ed0323fc1fa5cd963372c00934394c122bc3d2c02f31b049532"} Oct 07 12:49:01 crc kubenswrapper[5024]: I1007 12:49:01.263926 5024 scope.go:117] "RemoveContainer" containerID="02c9080c4f6e269679bed641e973d244a3dd0ee4d532f6d0a308638b7aaeeb99" Oct 07 12:49:01 crc kubenswrapper[5024]: I1007 12:49:01.264006 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 07 12:49:01 crc kubenswrapper[5024]: I1007 12:49:01.283068 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.283040634 podStartE2EDuration="2.283040634s" podCreationTimestamp="2025-10-07 12:48:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:49:01.277338487 +0000 UTC m=+1279.353125325" watchObservedRunningTime="2025-10-07 12:49:01.283040634 +0000 UTC m=+1279.358827472" Oct 07 12:49:01 crc kubenswrapper[5024]: I1007 12:49:01.307213 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 07 12:49:01 crc kubenswrapper[5024]: I1007 12:49:01.314858 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 07 12:49:01 crc kubenswrapper[5024]: I1007 12:49:01.324249 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 07 12:49:01 crc kubenswrapper[5024]: E1007 12:49:01.324864 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eec8c08-d884-41c0-b9da-af69d056c96a" containerName="nova-scheduler-scheduler" Oct 07 12:49:01 crc kubenswrapper[5024]: I1007 12:49:01.324969 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eec8c08-d884-41c0-b9da-af69d056c96a" containerName="nova-scheduler-scheduler" Oct 07 12:49:01 crc kubenswrapper[5024]: I1007 12:49:01.325273 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eec8c08-d884-41c0-b9da-af69d056c96a" containerName="nova-scheduler-scheduler" Oct 07 12:49:01 crc kubenswrapper[5024]: I1007 12:49:01.325924 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 07 12:49:01 crc kubenswrapper[5024]: I1007 12:49:01.328595 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 07 12:49:01 crc kubenswrapper[5024]: I1007 12:49:01.330845 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 07 12:49:01 crc kubenswrapper[5024]: I1007 12:49:01.451833 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6967c30-2237-4e21-93c5-ae456e0383d6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d6967c30-2237-4e21-93c5-ae456e0383d6\") " pod="openstack/nova-scheduler-0" Oct 07 12:49:01 crc kubenswrapper[5024]: I1007 12:49:01.452198 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz2bb\" (UniqueName: \"kubernetes.io/projected/d6967c30-2237-4e21-93c5-ae456e0383d6-kube-api-access-mz2bb\") pod \"nova-scheduler-0\" (UID: \"d6967c30-2237-4e21-93c5-ae456e0383d6\") " pod="openstack/nova-scheduler-0" Oct 07 12:49:01 crc kubenswrapper[5024]: I1007 12:49:01.452960 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6967c30-2237-4e21-93c5-ae456e0383d6-config-data\") pod \"nova-scheduler-0\" (UID: \"d6967c30-2237-4e21-93c5-ae456e0383d6\") " pod="openstack/nova-scheduler-0" Oct 07 12:49:01 crc kubenswrapper[5024]: I1007 12:49:01.554661 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6967c30-2237-4e21-93c5-ae456e0383d6-config-data\") pod \"nova-scheduler-0\" (UID: \"d6967c30-2237-4e21-93c5-ae456e0383d6\") " pod="openstack/nova-scheduler-0" Oct 07 12:49:01 crc kubenswrapper[5024]: I1007 12:49:01.554781 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6967c30-2237-4e21-93c5-ae456e0383d6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d6967c30-2237-4e21-93c5-ae456e0383d6\") " pod="openstack/nova-scheduler-0" Oct 07 12:49:01 crc kubenswrapper[5024]: I1007 12:49:01.554865 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz2bb\" (UniqueName: \"kubernetes.io/projected/d6967c30-2237-4e21-93c5-ae456e0383d6-kube-api-access-mz2bb\") pod \"nova-scheduler-0\" (UID: \"d6967c30-2237-4e21-93c5-ae456e0383d6\") " pod="openstack/nova-scheduler-0" Oct 07 12:49:01 crc kubenswrapper[5024]: I1007 12:49:01.560166 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6967c30-2237-4e21-93c5-ae456e0383d6-config-data\") pod \"nova-scheduler-0\" (UID: \"d6967c30-2237-4e21-93c5-ae456e0383d6\") " pod="openstack/nova-scheduler-0" Oct 07 12:49:01 crc kubenswrapper[5024]: I1007 12:49:01.560947 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6967c30-2237-4e21-93c5-ae456e0383d6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d6967c30-2237-4e21-93c5-ae456e0383d6\") " pod="openstack/nova-scheduler-0" Oct 07 12:49:01 crc kubenswrapper[5024]: I1007 12:49:01.573072 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz2bb\" (UniqueName: \"kubernetes.io/projected/d6967c30-2237-4e21-93c5-ae456e0383d6-kube-api-access-mz2bb\") pod \"nova-scheduler-0\" (UID: \"d6967c30-2237-4e21-93c5-ae456e0383d6\") " pod="openstack/nova-scheduler-0" Oct 07 12:49:01 crc kubenswrapper[5024]: I1007 12:49:01.645348 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 07 12:49:02 crc kubenswrapper[5024]: I1007 12:49:02.080649 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 07 12:49:02 crc kubenswrapper[5024]: I1007 12:49:02.092371 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d0a23fee-3395-44fa-9ce5-71a1530a2910" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.179:8775/\": dial tcp 10.217.0.179:8775: connect: connection refused" Oct 07 12:49:02 crc kubenswrapper[5024]: I1007 12:49:02.092380 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d0a23fee-3395-44fa-9ce5-71a1530a2910" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.179:8775/\": dial tcp 10.217.0.179:8775: connect: connection refused" Oct 07 12:49:02 crc kubenswrapper[5024]: I1007 12:49:02.276885 5024 generic.go:334] "Generic (PLEG): container finished" podID="d0a23fee-3395-44fa-9ce5-71a1530a2910" containerID="6be90aea6ac8a8bde144dace9ec4f3b15791e4829002170b796023f2d6e26d26" exitCode=0 Oct 07 12:49:02 crc kubenswrapper[5024]: I1007 12:49:02.276956 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0a23fee-3395-44fa-9ce5-71a1530a2910","Type":"ContainerDied","Data":"6be90aea6ac8a8bde144dace9ec4f3b15791e4829002170b796023f2d6e26d26"} Oct 07 12:49:02 crc kubenswrapper[5024]: I1007 12:49:02.278645 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d6967c30-2237-4e21-93c5-ae456e0383d6","Type":"ContainerStarted","Data":"ab90fdb78b5f53242d15a5a77266ca341fb88862613daf946c374ceab8b3b4f4"} Oct 07 12:49:02 crc kubenswrapper[5024]: I1007 12:49:02.278672 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"d6967c30-2237-4e21-93c5-ae456e0383d6","Type":"ContainerStarted","Data":"1b00c69340fac61ef6d7b491ac691de49b2dc80859f9fd1eae060b2177085ae2"} Oct 07 12:49:02 crc kubenswrapper[5024]: I1007 12:49:02.301783 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.301763365 podStartE2EDuration="1.301763365s" podCreationTimestamp="2025-10-07 12:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:49:02.296737498 +0000 UTC m=+1280.372524346" watchObservedRunningTime="2025-10-07 12:49:02.301763365 +0000 UTC m=+1280.377550203" Oct 07 12:49:02 crc kubenswrapper[5024]: I1007 12:49:02.470964 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 07 12:49:02 crc kubenswrapper[5024]: I1007 12:49:02.570491 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0a23fee-3395-44fa-9ce5-71a1530a2910-nova-metadata-tls-certs\") pod \"d0a23fee-3395-44fa-9ce5-71a1530a2910\" (UID: \"d0a23fee-3395-44fa-9ce5-71a1530a2910\") " Oct 07 12:49:02 crc kubenswrapper[5024]: I1007 12:49:02.570647 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0a23fee-3395-44fa-9ce5-71a1530a2910-config-data\") pod \"d0a23fee-3395-44fa-9ce5-71a1530a2910\" (UID: \"d0a23fee-3395-44fa-9ce5-71a1530a2910\") " Oct 07 12:49:02 crc kubenswrapper[5024]: I1007 12:49:02.570683 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a23fee-3395-44fa-9ce5-71a1530a2910-combined-ca-bundle\") pod \"d0a23fee-3395-44fa-9ce5-71a1530a2910\" (UID: \"d0a23fee-3395-44fa-9ce5-71a1530a2910\") " Oct 07 12:49:02 crc 
kubenswrapper[5024]: I1007 12:49:02.570718 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cgwn\" (UniqueName: \"kubernetes.io/projected/d0a23fee-3395-44fa-9ce5-71a1530a2910-kube-api-access-2cgwn\") pod \"d0a23fee-3395-44fa-9ce5-71a1530a2910\" (UID: \"d0a23fee-3395-44fa-9ce5-71a1530a2910\") " Oct 07 12:49:02 crc kubenswrapper[5024]: I1007 12:49:02.570748 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0a23fee-3395-44fa-9ce5-71a1530a2910-logs\") pod \"d0a23fee-3395-44fa-9ce5-71a1530a2910\" (UID: \"d0a23fee-3395-44fa-9ce5-71a1530a2910\") " Oct 07 12:49:02 crc kubenswrapper[5024]: I1007 12:49:02.571283 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0a23fee-3395-44fa-9ce5-71a1530a2910-logs" (OuterVolumeSpecName: "logs") pod "d0a23fee-3395-44fa-9ce5-71a1530a2910" (UID: "d0a23fee-3395-44fa-9ce5-71a1530a2910"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:49:02 crc kubenswrapper[5024]: I1007 12:49:02.575432 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0a23fee-3395-44fa-9ce5-71a1530a2910-kube-api-access-2cgwn" (OuterVolumeSpecName: "kube-api-access-2cgwn") pod "d0a23fee-3395-44fa-9ce5-71a1530a2910" (UID: "d0a23fee-3395-44fa-9ce5-71a1530a2910"). InnerVolumeSpecName "kube-api-access-2cgwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:49:02 crc kubenswrapper[5024]: I1007 12:49:02.599296 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0a23fee-3395-44fa-9ce5-71a1530a2910-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0a23fee-3395-44fa-9ce5-71a1530a2910" (UID: "d0a23fee-3395-44fa-9ce5-71a1530a2910"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:49:02 crc kubenswrapper[5024]: I1007 12:49:02.604298 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0a23fee-3395-44fa-9ce5-71a1530a2910-config-data" (OuterVolumeSpecName: "config-data") pod "d0a23fee-3395-44fa-9ce5-71a1530a2910" (UID: "d0a23fee-3395-44fa-9ce5-71a1530a2910"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:49:02 crc kubenswrapper[5024]: I1007 12:49:02.628760 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0a23fee-3395-44fa-9ce5-71a1530a2910-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d0a23fee-3395-44fa-9ce5-71a1530a2910" (UID: "d0a23fee-3395-44fa-9ce5-71a1530a2910"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:49:02 crc kubenswrapper[5024]: I1007 12:49:02.673346 5024 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0a23fee-3395-44fa-9ce5-71a1530a2910-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 07 12:49:02 crc kubenswrapper[5024]: I1007 12:49:02.673390 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0a23fee-3395-44fa-9ce5-71a1530a2910-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:49:02 crc kubenswrapper[5024]: I1007 12:49:02.673403 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a23fee-3395-44fa-9ce5-71a1530a2910-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:49:02 crc kubenswrapper[5024]: I1007 12:49:02.673415 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cgwn\" (UniqueName: 
\"kubernetes.io/projected/d0a23fee-3395-44fa-9ce5-71a1530a2910-kube-api-access-2cgwn\") on node \"crc\" DevicePath \"\"" Oct 07 12:49:02 crc kubenswrapper[5024]: I1007 12:49:02.673428 5024 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0a23fee-3395-44fa-9ce5-71a1530a2910-logs\") on node \"crc\" DevicePath \"\"" Oct 07 12:49:02 crc kubenswrapper[5024]: I1007 12:49:02.761251 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eec8c08-d884-41c0-b9da-af69d056c96a" path="/var/lib/kubelet/pods/6eec8c08-d884-41c0-b9da-af69d056c96a/volumes" Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.292847 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.292849 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0a23fee-3395-44fa-9ce5-71a1530a2910","Type":"ContainerDied","Data":"d7930be1dd79d86e487256f7755e68c413c558b54fd8a6ee8b41028b8bf4ed4a"} Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.304509 5024 scope.go:117] "RemoveContainer" containerID="6be90aea6ac8a8bde144dace9ec4f3b15791e4829002170b796023f2d6e26d26" Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.336031 5024 scope.go:117] "RemoveContainer" containerID="eca43c671e1b9b2e9f7cdeaaffc0c105ba429f871bedbdffa310f93e73d79822" Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.351852 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.368446 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.375656 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 07 12:49:03 crc kubenswrapper[5024]: E1007 12:49:03.376043 5024 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="d0a23fee-3395-44fa-9ce5-71a1530a2910" containerName="nova-metadata-metadata" Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.376060 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0a23fee-3395-44fa-9ce5-71a1530a2910" containerName="nova-metadata-metadata" Oct 07 12:49:03 crc kubenswrapper[5024]: E1007 12:49:03.376087 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0a23fee-3395-44fa-9ce5-71a1530a2910" containerName="nova-metadata-log" Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.376093 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0a23fee-3395-44fa-9ce5-71a1530a2910" containerName="nova-metadata-log" Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.376274 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0a23fee-3395-44fa-9ce5-71a1530a2910" containerName="nova-metadata-metadata" Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.376303 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0a23fee-3395-44fa-9ce5-71a1530a2910" containerName="nova-metadata-log" Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.377198 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.381043 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.381320 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.391379 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.491551 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f153807-258c-4fd9-a4d7-d16ab555b74f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8f153807-258c-4fd9-a4d7-d16ab555b74f\") " pod="openstack/nova-metadata-0" Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.491816 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms8ss\" (UniqueName: \"kubernetes.io/projected/8f153807-258c-4fd9-a4d7-d16ab555b74f-kube-api-access-ms8ss\") pod \"nova-metadata-0\" (UID: \"8f153807-258c-4fd9-a4d7-d16ab555b74f\") " pod="openstack/nova-metadata-0" Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.491950 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f153807-258c-4fd9-a4d7-d16ab555b74f-config-data\") pod \"nova-metadata-0\" (UID: \"8f153807-258c-4fd9-a4d7-d16ab555b74f\") " pod="openstack/nova-metadata-0" Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.492026 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8f153807-258c-4fd9-a4d7-d16ab555b74f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8f153807-258c-4fd9-a4d7-d16ab555b74f\") " pod="openstack/nova-metadata-0" Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.492103 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f153807-258c-4fd9-a4d7-d16ab555b74f-logs\") pod \"nova-metadata-0\" (UID: \"8f153807-258c-4fd9-a4d7-d16ab555b74f\") " pod="openstack/nova-metadata-0" Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.593353 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f153807-258c-4fd9-a4d7-d16ab555b74f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8f153807-258c-4fd9-a4d7-d16ab555b74f\") " pod="openstack/nova-metadata-0" Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.593409 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms8ss\" (UniqueName: \"kubernetes.io/projected/8f153807-258c-4fd9-a4d7-d16ab555b74f-kube-api-access-ms8ss\") pod \"nova-metadata-0\" (UID: \"8f153807-258c-4fd9-a4d7-d16ab555b74f\") " pod="openstack/nova-metadata-0" Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.593438 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f153807-258c-4fd9-a4d7-d16ab555b74f-config-data\") pod \"nova-metadata-0\" (UID: \"8f153807-258c-4fd9-a4d7-d16ab555b74f\") " pod="openstack/nova-metadata-0" Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.593457 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f153807-258c-4fd9-a4d7-d16ab555b74f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8f153807-258c-4fd9-a4d7-d16ab555b74f\") " 
pod="openstack/nova-metadata-0" Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.593477 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f153807-258c-4fd9-a4d7-d16ab555b74f-logs\") pod \"nova-metadata-0\" (UID: \"8f153807-258c-4fd9-a4d7-d16ab555b74f\") " pod="openstack/nova-metadata-0" Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.593848 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f153807-258c-4fd9-a4d7-d16ab555b74f-logs\") pod \"nova-metadata-0\" (UID: \"8f153807-258c-4fd9-a4d7-d16ab555b74f\") " pod="openstack/nova-metadata-0" Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.597315 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f153807-258c-4fd9-a4d7-d16ab555b74f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8f153807-258c-4fd9-a4d7-d16ab555b74f\") " pod="openstack/nova-metadata-0" Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.599053 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f153807-258c-4fd9-a4d7-d16ab555b74f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8f153807-258c-4fd9-a4d7-d16ab555b74f\") " pod="openstack/nova-metadata-0" Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.608964 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f153807-258c-4fd9-a4d7-d16ab555b74f-config-data\") pod \"nova-metadata-0\" (UID: \"8f153807-258c-4fd9-a4d7-d16ab555b74f\") " pod="openstack/nova-metadata-0" Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.609478 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms8ss\" (UniqueName: 
\"kubernetes.io/projected/8f153807-258c-4fd9-a4d7-d16ab555b74f-kube-api-access-ms8ss\") pod \"nova-metadata-0\" (UID: \"8f153807-258c-4fd9-a4d7-d16ab555b74f\") " pod="openstack/nova-metadata-0" Oct 07 12:49:03 crc kubenswrapper[5024]: I1007 12:49:03.702706 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 07 12:49:04 crc kubenswrapper[5024]: I1007 12:49:04.145373 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 07 12:49:04 crc kubenswrapper[5024]: W1007 12:49:04.157892 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f153807_258c_4fd9_a4d7_d16ab555b74f.slice/crio-f6796eea30438ad6a43c41137f9c95415853c7ace220a7e5a17b6dbd16b6eb41 WatchSource:0}: Error finding container f6796eea30438ad6a43c41137f9c95415853c7ace220a7e5a17b6dbd16b6eb41: Status 404 returned error can't find the container with id f6796eea30438ad6a43c41137f9c95415853c7ace220a7e5a17b6dbd16b6eb41 Oct 07 12:49:04 crc kubenswrapper[5024]: I1007 12:49:04.306815 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f153807-258c-4fd9-a4d7-d16ab555b74f","Type":"ContainerStarted","Data":"f6796eea30438ad6a43c41137f9c95415853c7ace220a7e5a17b6dbd16b6eb41"} Oct 07 12:49:04 crc kubenswrapper[5024]: I1007 12:49:04.764050 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0a23fee-3395-44fa-9ce5-71a1530a2910" path="/var/lib/kubelet/pods/d0a23fee-3395-44fa-9ce5-71a1530a2910/volumes" Oct 07 12:49:05 crc kubenswrapper[5024]: I1007 12:49:05.318418 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f153807-258c-4fd9-a4d7-d16ab555b74f","Type":"ContainerStarted","Data":"64553c2f11f89e7f0e3d0ecf51f5ff126d15d9edcafc89673a1c620c9282c3ca"} Oct 07 12:49:05 crc kubenswrapper[5024]: I1007 12:49:05.318640 5024 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f153807-258c-4fd9-a4d7-d16ab555b74f","Type":"ContainerStarted","Data":"85dafeeaa1958412d373722cebabaabdd6568452e36e3e6e97b4978abbc83e53"} Oct 07 12:49:05 crc kubenswrapper[5024]: I1007 12:49:05.336811 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.33679026 podStartE2EDuration="2.33679026s" podCreationTimestamp="2025-10-07 12:49:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:49:05.334249346 +0000 UTC m=+1283.410036234" watchObservedRunningTime="2025-10-07 12:49:05.33679026 +0000 UTC m=+1283.412577108" Oct 07 12:49:06 crc kubenswrapper[5024]: I1007 12:49:06.646833 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 07 12:49:08 crc kubenswrapper[5024]: I1007 12:49:08.702901 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 07 12:49:08 crc kubenswrapper[5024]: I1007 12:49:08.703311 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 07 12:49:09 crc kubenswrapper[5024]: I1007 12:49:09.986729 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 07 12:49:09 crc kubenswrapper[5024]: I1007 12:49:09.987050 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 07 12:49:10 crc kubenswrapper[5024]: I1007 12:49:10.999322 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f63c03fb-b295-4ba6-b385-842aebd5147f" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.188:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 07 12:49:10 crc kubenswrapper[5024]: 
I1007 12:49:10.999382 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f63c03fb-b295-4ba6-b385-842aebd5147f" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.188:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 07 12:49:11 crc kubenswrapper[5024]: I1007 12:49:11.646552 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 07 12:49:11 crc kubenswrapper[5024]: I1007 12:49:11.674505 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 07 12:49:12 crc kubenswrapper[5024]: I1007 12:49:12.430098 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 07 12:49:13 crc kubenswrapper[5024]: I1007 12:49:13.703876 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 07 12:49:13 crc kubenswrapper[5024]: I1007 12:49:13.704254 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 07 12:49:14 crc kubenswrapper[5024]: I1007 12:49:14.718465 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8f153807-258c-4fd9-a4d7-d16ab555b74f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.190:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 07 12:49:14 crc kubenswrapper[5024]: I1007 12:49:14.719362 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8f153807-258c-4fd9-a4d7-d16ab555b74f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.190:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 07 12:49:18 crc kubenswrapper[5024]: I1007 12:49:18.469409 5024 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 07 12:49:19 crc kubenswrapper[5024]: I1007 12:49:19.992567 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 07 12:49:19 crc kubenswrapper[5024]: I1007 12:49:19.993035 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 07 12:49:19 crc kubenswrapper[5024]: I1007 12:49:19.996094 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 07 12:49:20 crc kubenswrapper[5024]: I1007 12:49:20.000821 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 07 12:49:20 crc kubenswrapper[5024]: I1007 12:49:20.469705 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 07 12:49:20 crc kubenswrapper[5024]: I1007 12:49:20.476834 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 07 12:49:23 crc kubenswrapper[5024]: I1007 12:49:23.707909 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 07 12:49:23 crc kubenswrapper[5024]: I1007 12:49:23.709078 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 07 12:49:23 crc kubenswrapper[5024]: I1007 12:49:23.717654 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 07 12:49:24 crc kubenswrapper[5024]: I1007 12:49:24.507209 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 07 12:49:32 crc kubenswrapper[5024]: I1007 12:49:32.505661 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 07 12:49:33 crc kubenswrapper[5024]: I1007 12:49:33.605382 5024 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 07 12:49:36 crc kubenswrapper[5024]: I1007 12:49:36.184869 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="d2d61d4d-4921-4832-bb53-3ca3a70663cf" containerName="rabbitmq" containerID="cri-o://c0a0b0343fd64d8e25099cce68699244689d9756420b81adadf059d849991bce" gracePeriod=604797 Oct 07 12:49:37 crc kubenswrapper[5024]: I1007 12:49:37.267581 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="cb894b3e-4bf4-46b0-8e54-e4a17c02d13f" containerName="rabbitmq" containerID="cri-o://089816b26a36af7d0c2d6d18c92f22f1f583ffe9022d4ee2660443bbb0d8ce65" gracePeriod=604797 Oct 07 12:49:40 crc kubenswrapper[5024]: I1007 12:49:40.455760 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="cb894b3e-4bf4-46b0-8e54-e4a17c02d13f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Oct 07 12:49:40 crc kubenswrapper[5024]: I1007 12:49:40.752734 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="d2d61d4d-4921-4832-bb53-3ca3a70663cf" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.667113 5024 generic.go:334] "Generic (PLEG): container finished" podID="d2d61d4d-4921-4832-bb53-3ca3a70663cf" containerID="c0a0b0343fd64d8e25099cce68699244689d9756420b81adadf059d849991bce" exitCode=0 Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.667168 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d2d61d4d-4921-4832-bb53-3ca3a70663cf","Type":"ContainerDied","Data":"c0a0b0343fd64d8e25099cce68699244689d9756420b81adadf059d849991bce"} Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 
12:49:42.667431 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d2d61d4d-4921-4832-bb53-3ca3a70663cf","Type":"ContainerDied","Data":"05b944984f9a20d2dacb7d00eb2a7f3cc1b4c789e346030b65843277ad199c2f"} Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.667447 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05b944984f9a20d2dacb7d00eb2a7f3cc1b4c789e346030b65843277ad199c2f" Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.749189 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.848237 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d2d61d4d-4921-4832-bb53-3ca3a70663cf-rabbitmq-erlang-cookie\") pod \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.848610 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2d61d4d-4921-4832-bb53-3ca3a70663cf-config-data\") pod \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.848665 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d2d61d4d-4921-4832-bb53-3ca3a70663cf-rabbitmq-tls\") pod \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.848699 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\" (UID: 
\"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.848781 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d2d61d4d-4921-4832-bb53-3ca3a70663cf-plugins-conf\") pod \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.848822 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d2d61d4d-4921-4832-bb53-3ca3a70663cf-pod-info\") pod \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.848853 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d2d61d4d-4921-4832-bb53-3ca3a70663cf-rabbitmq-confd\") pod \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.848874 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgjdz\" (UniqueName: \"kubernetes.io/projected/d2d61d4d-4921-4832-bb53-3ca3a70663cf-kube-api-access-mgjdz\") pod \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.848899 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d2d61d4d-4921-4832-bb53-3ca3a70663cf-rabbitmq-plugins\") pod \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.848974 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/d2d61d4d-4921-4832-bb53-3ca3a70663cf-server-conf\") pod \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.849033 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d2d61d4d-4921-4832-bb53-3ca3a70663cf-erlang-cookie-secret\") pod \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\" (UID: \"d2d61d4d-4921-4832-bb53-3ca3a70663cf\") " Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.850727 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2d61d4d-4921-4832-bb53-3ca3a70663cf-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "d2d61d4d-4921-4832-bb53-3ca3a70663cf" (UID: "d2d61d4d-4921-4832-bb53-3ca3a70663cf"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.850762 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2d61d4d-4921-4832-bb53-3ca3a70663cf-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "d2d61d4d-4921-4832-bb53-3ca3a70663cf" (UID: "d2d61d4d-4921-4832-bb53-3ca3a70663cf"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.854813 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2d61d4d-4921-4832-bb53-3ca3a70663cf-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "d2d61d4d-4921-4832-bb53-3ca3a70663cf" (UID: "d2d61d4d-4921-4832-bb53-3ca3a70663cf"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.856071 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d2d61d4d-4921-4832-bb53-3ca3a70663cf-pod-info" (OuterVolumeSpecName: "pod-info") pod "d2d61d4d-4921-4832-bb53-3ca3a70663cf" (UID: "d2d61d4d-4921-4832-bb53-3ca3a70663cf"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.859591 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2d61d4d-4921-4832-bb53-3ca3a70663cf-kube-api-access-mgjdz" (OuterVolumeSpecName: "kube-api-access-mgjdz") pod "d2d61d4d-4921-4832-bb53-3ca3a70663cf" (UID: "d2d61d4d-4921-4832-bb53-3ca3a70663cf"). InnerVolumeSpecName "kube-api-access-mgjdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.859616 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2d61d4d-4921-4832-bb53-3ca3a70663cf-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "d2d61d4d-4921-4832-bb53-3ca3a70663cf" (UID: "d2d61d4d-4921-4832-bb53-3ca3a70663cf"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.860352 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2d61d4d-4921-4832-bb53-3ca3a70663cf-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "d2d61d4d-4921-4832-bb53-3ca3a70663cf" (UID: "d2d61d4d-4921-4832-bb53-3ca3a70663cf"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.864967 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "d2d61d4d-4921-4832-bb53-3ca3a70663cf" (UID: "d2d61d4d-4921-4832-bb53-3ca3a70663cf"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.886454 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2d61d4d-4921-4832-bb53-3ca3a70663cf-config-data" (OuterVolumeSpecName: "config-data") pod "d2d61d4d-4921-4832-bb53-3ca3a70663cf" (UID: "d2d61d4d-4921-4832-bb53-3ca3a70663cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.922243 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2d61d4d-4921-4832-bb53-3ca3a70663cf-server-conf" (OuterVolumeSpecName: "server-conf") pod "d2d61d4d-4921-4832-bb53-3ca3a70663cf" (UID: "d2d61d4d-4921-4832-bb53-3ca3a70663cf"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.950877 5024 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d2d61d4d-4921-4832-bb53-3ca3a70663cf-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.951172 5024 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d2d61d4d-4921-4832-bb53-3ca3a70663cf-pod-info\") on node \"crc\" DevicePath \"\"" Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.951260 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgjdz\" (UniqueName: \"kubernetes.io/projected/d2d61d4d-4921-4832-bb53-3ca3a70663cf-kube-api-access-mgjdz\") on node \"crc\" DevicePath \"\"" Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.951351 5024 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d2d61d4d-4921-4832-bb53-3ca3a70663cf-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.951427 5024 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d2d61d4d-4921-4832-bb53-3ca3a70663cf-server-conf\") on node \"crc\" DevicePath \"\"" Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.951511 5024 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d2d61d4d-4921-4832-bb53-3ca3a70663cf-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.951595 5024 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d2d61d4d-4921-4832-bb53-3ca3a70663cf-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 07 12:49:42 crc kubenswrapper[5024]: 
I1007 12:49:42.951719 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2d61d4d-4921-4832-bb53-3ca3a70663cf-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.951802 5024 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d2d61d4d-4921-4832-bb53-3ca3a70663cf-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.951913 5024 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.977801 5024 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Oct 07 12:49:42 crc kubenswrapper[5024]: I1007 12:49:42.983053 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2d61d4d-4921-4832-bb53-3ca3a70663cf-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "d2d61d4d-4921-4832-bb53-3ca3a70663cf" (UID: "d2d61d4d-4921-4832-bb53-3ca3a70663cf"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.060088 5024 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d2d61d4d-4921-4832-bb53-3ca3a70663cf-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.060128 5024 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.689046 5024 generic.go:334] "Generic (PLEG): container finished" podID="cb894b3e-4bf4-46b0-8e54-e4a17c02d13f" containerID="089816b26a36af7d0c2d6d18c92f22f1f583ffe9022d4ee2660443bbb0d8ce65" exitCode=0 Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.689167 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.692266 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f","Type":"ContainerDied","Data":"089816b26a36af7d0c2d6d18c92f22f1f583ffe9022d4ee2660443bbb0d8ce65"} Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.719470 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.726993 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.748928 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 07 12:49:43 crc kubenswrapper[5024]: E1007 12:49:43.749473 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d61d4d-4921-4832-bb53-3ca3a70663cf" containerName="setup-container" Oct 07 12:49:43 crc 
kubenswrapper[5024]: I1007 12:49:43.749497 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d61d4d-4921-4832-bb53-3ca3a70663cf" containerName="setup-container" Oct 07 12:49:43 crc kubenswrapper[5024]: E1007 12:49:43.749533 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d61d4d-4921-4832-bb53-3ca3a70663cf" containerName="rabbitmq" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.749542 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d61d4d-4921-4832-bb53-3ca3a70663cf" containerName="rabbitmq" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.749741 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2d61d4d-4921-4832-bb53-3ca3a70663cf" containerName="rabbitmq" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.755815 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.758782 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.758876 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.758972 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-fdntn" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.759057 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.759103 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.759236 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.760067 
5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.763639 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.823150 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.872715 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/10cd989b-34e3-4e21-bb69-40115806b190-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.872791 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/10cd989b-34e3-4e21-bb69-40115806b190-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.872818 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/10cd989b-34e3-4e21-bb69-40115806b190-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.872890 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/10cd989b-34e3-4e21-bb69-40115806b190-config-data\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " 
pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.872912 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/10cd989b-34e3-4e21-bb69-40115806b190-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.872954 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/10cd989b-34e3-4e21-bb69-40115806b190-server-conf\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.872978 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/10cd989b-34e3-4e21-bb69-40115806b190-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.873009 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/10cd989b-34e3-4e21-bb69-40115806b190-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.873039 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmjm9\" (UniqueName: \"kubernetes.io/projected/10cd989b-34e3-4e21-bb69-40115806b190-kube-api-access-nmjm9\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc 
kubenswrapper[5024]: I1007 12:49:43.873457 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/10cd989b-34e3-4e21-bb69-40115806b190-pod-info\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.873513 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.974540 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-rabbitmq-erlang-cookie\") pod \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.974585 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-rabbitmq-plugins\") pod \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.974614 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.974698 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-config-data\") pod \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.974719 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-server-conf\") pod \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.974772 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-erlang-cookie-secret\") pod \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.974818 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-plugins-conf\") pod \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.974864 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdkvx\" (UniqueName: \"kubernetes.io/projected/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-kube-api-access-cdkvx\") pod \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.974886 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-pod-info\") pod \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 
12:49:43.974915 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-rabbitmq-confd\") pod \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.974945 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-rabbitmq-tls\") pod \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\" (UID: \"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f\") " Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.975155 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/10cd989b-34e3-4e21-bb69-40115806b190-pod-info\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.975182 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.975224 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/10cd989b-34e3-4e21-bb69-40115806b190-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.975250 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/10cd989b-34e3-4e21-bb69-40115806b190-erlang-cookie-secret\") pod 
\"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.975274 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/10cd989b-34e3-4e21-bb69-40115806b190-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.975316 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/10cd989b-34e3-4e21-bb69-40115806b190-config-data\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.975330 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/10cd989b-34e3-4e21-bb69-40115806b190-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.975361 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/10cd989b-34e3-4e21-bb69-40115806b190-server-conf\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.975379 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/10cd989b-34e3-4e21-bb69-40115806b190-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 
12:49:43.975404 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/10cd989b-34e3-4e21-bb69-40115806b190-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.975420 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmjm9\" (UniqueName: \"kubernetes.io/projected/10cd989b-34e3-4e21-bb69-40115806b190-kube-api-access-nmjm9\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.976321 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.976471 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "cb894b3e-4bf4-46b0-8e54-e4a17c02d13f" (UID: "cb894b3e-4bf4-46b0-8e54-e4a17c02d13f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.976540 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/10cd989b-34e3-4e21-bb69-40115806b190-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.977330 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/10cd989b-34e3-4e21-bb69-40115806b190-config-data\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.977388 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/10cd989b-34e3-4e21-bb69-40115806b190-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.978242 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/10cd989b-34e3-4e21-bb69-40115806b190-server-conf\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.980194 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "cb894b3e-4bf4-46b0-8e54-e4a17c02d13f" (UID: "cb894b3e-4bf4-46b0-8e54-e4a17c02d13f"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.980736 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "cb894b3e-4bf4-46b0-8e54-e4a17c02d13f" (UID: "cb894b3e-4bf4-46b0-8e54-e4a17c02d13f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.981012 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/10cd989b-34e3-4e21-bb69-40115806b190-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.982998 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "cb894b3e-4bf4-46b0-8e54-e4a17c02d13f" (UID: "cb894b3e-4bf4-46b0-8e54-e4a17c02d13f"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.984455 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/10cd989b-34e3-4e21-bb69-40115806b190-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.985410 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "cb894b3e-4bf4-46b0-8e54-e4a17c02d13f" (UID: "cb894b3e-4bf4-46b0-8e54-e4a17c02d13f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.988374 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-pod-info" (OuterVolumeSpecName: "pod-info") pod "cb894b3e-4bf4-46b0-8e54-e4a17c02d13f" (UID: "cb894b3e-4bf4-46b0-8e54-e4a17c02d13f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.988856 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "cb894b3e-4bf4-46b0-8e54-e4a17c02d13f" (UID: "cb894b3e-4bf4-46b0-8e54-e4a17c02d13f"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.989063 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/10cd989b-34e3-4e21-bb69-40115806b190-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0"
Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.990019 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/10cd989b-34e3-4e21-bb69-40115806b190-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0"
Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.993795 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-kube-api-access-cdkvx" (OuterVolumeSpecName: "kube-api-access-cdkvx") pod "cb894b3e-4bf4-46b0-8e54-e4a17c02d13f" (UID: "cb894b3e-4bf4-46b0-8e54-e4a17c02d13f"). InnerVolumeSpecName "kube-api-access-cdkvx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.995492 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/10cd989b-34e3-4e21-bb69-40115806b190-pod-info\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0"
Oct 07 12:49:43 crc kubenswrapper[5024]: I1007 12:49:43.996640 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmjm9\" (UniqueName: \"kubernetes.io/projected/10cd989b-34e3-4e21-bb69-40115806b190-kube-api-access-nmjm9\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.021719 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-config-data" (OuterVolumeSpecName: "config-data") pod "cb894b3e-4bf4-46b0-8e54-e4a17c02d13f" (UID: "cb894b3e-4bf4-46b0-8e54-e4a17c02d13f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.036577 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"10cd989b-34e3-4e21-bb69-40115806b190\") " pod="openstack/rabbitmq-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.066543 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-server-conf" (OuterVolumeSpecName: "server-conf") pod "cb894b3e-4bf4-46b0-8e54-e4a17c02d13f" (UID: "cb894b3e-4bf4-46b0-8e54-e4a17c02d13f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.076969 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdkvx\" (UniqueName: \"kubernetes.io/projected/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-kube-api-access-cdkvx\") on node \"crc\" DevicePath \"\""
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.077002 5024 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-pod-info\") on node \"crc\" DevicePath \"\""
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.077014 5024 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.077024 5024 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.077033 5024 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.077132 5024 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.077202 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-config-data\") on node \"crc\" DevicePath \"\""
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.077212 5024 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-server-conf\") on node \"crc\" DevicePath \"\""
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.077221 5024 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.077230 5024 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-plugins-conf\") on node \"crc\" DevicePath \"\""
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.098440 5024 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.100487 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "cb894b3e-4bf4-46b0-8e54-e4a17c02d13f" (UID: "cb894b3e-4bf4-46b0-8e54-e4a17c02d13f"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.137747 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.178672 5024 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.178707 5024 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.564657 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Oct 07 12:49:44 crc kubenswrapper[5024]: W1007 12:49:44.576154 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10cd989b_34e3_4e21_bb69_40115806b190.slice/crio-c15f7498af600001bbded98eadc620dbb834ad0ce4e2fe7eb2a06f2825957758 WatchSource:0}: Error finding container c15f7498af600001bbded98eadc620dbb834ad0ce4e2fe7eb2a06f2825957758: Status 404 returned error can't find the container with id c15f7498af600001bbded98eadc620dbb834ad0ce4e2fe7eb2a06f2825957758
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.701150 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"10cd989b-34e3-4e21-bb69-40115806b190","Type":"ContainerStarted","Data":"c15f7498af600001bbded98eadc620dbb834ad0ce4e2fe7eb2a06f2825957758"}
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.703832 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cb894b3e-4bf4-46b0-8e54-e4a17c02d13f","Type":"ContainerDied","Data":"04a61398d2d0069ea1cc34914d9a59420a02d4659e3d0ae1fb1ccb4da4988b69"}
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.704155 5024 scope.go:117] "RemoveContainer" containerID="089816b26a36af7d0c2d6d18c92f22f1f583ffe9022d4ee2660443bbb0d8ce65"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.704391 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.745363 5024 scope.go:117] "RemoveContainer" containerID="69ff3a815cfcebe3b00f8e6f065090d3b8dd1b06444974727f4605fb76e990d5"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.749250 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.770532 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2d61d4d-4921-4832-bb53-3ca3a70663cf" path="/var/lib/kubelet/pods/d2d61d4d-4921-4832-bb53-3ca3a70663cf/volumes"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.771708 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.787255 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 07 12:49:44 crc kubenswrapper[5024]: E1007 12:49:44.787752 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb894b3e-4bf4-46b0-8e54-e4a17c02d13f" containerName="rabbitmq"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.787774 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb894b3e-4bf4-46b0-8e54-e4a17c02d13f" containerName="rabbitmq"
Oct 07 12:49:44 crc kubenswrapper[5024]: E1007 12:49:44.787799 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb894b3e-4bf4-46b0-8e54-e4a17c02d13f" containerName="setup-container"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.787807 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb894b3e-4bf4-46b0-8e54-e4a17c02d13f" containerName="setup-container"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.788003 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb894b3e-4bf4-46b0-8e54-e4a17c02d13f" containerName="rabbitmq"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.789172 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.800123 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.800611 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.800795 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.800867 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.801412 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.801703 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.801891 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-cf5sn"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.820939 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.891289 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a6533041-d509-4740-9ed5-06cdf97e7340-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.892063 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a6533041-d509-4740-9ed5-06cdf97e7340-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.892238 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a6533041-d509-4740-9ed5-06cdf97e7340-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.892338 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6533041-d509-4740-9ed5-06cdf97e7340-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.892500 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.892606 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a6533041-d509-4740-9ed5-06cdf97e7340-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.892707 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a6533041-d509-4740-9ed5-06cdf97e7340-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.892812 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a6533041-d509-4740-9ed5-06cdf97e7340-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.892933 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a6533041-d509-4740-9ed5-06cdf97e7340-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.893019 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a6533041-d509-4740-9ed5-06cdf97e7340-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.893089 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xpx2\" (UniqueName: \"kubernetes.io/projected/a6533041-d509-4740-9ed5-06cdf97e7340-kube-api-access-5xpx2\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.995094 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a6533041-d509-4740-9ed5-06cdf97e7340-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.995244 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a6533041-d509-4740-9ed5-06cdf97e7340-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.995274 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6533041-d509-4740-9ed5-06cdf97e7340-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.995329 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.995372 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a6533041-d509-4740-9ed5-06cdf97e7340-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.995420 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a6533041-d509-4740-9ed5-06cdf97e7340-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.995490 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a6533041-d509-4740-9ed5-06cdf97e7340-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.995538 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a6533041-d509-4740-9ed5-06cdf97e7340-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.995588 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a6533041-d509-4740-9ed5-06cdf97e7340-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.995612 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xpx2\" (UniqueName: \"kubernetes.io/projected/a6533041-d509-4740-9ed5-06cdf97e7340-kube-api-access-5xpx2\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.995669 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a6533041-d509-4740-9ed5-06cdf97e7340-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.996486 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.996639 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a6533041-d509-4740-9ed5-06cdf97e7340-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.996757 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a6533041-d509-4740-9ed5-06cdf97e7340-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.996938 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a6533041-d509-4740-9ed5-06cdf97e7340-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.997052 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6533041-d509-4740-9ed5-06cdf97e7340-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:44 crc kubenswrapper[5024]: I1007 12:49:44.997398 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a6533041-d509-4740-9ed5-06cdf97e7340-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:45 crc kubenswrapper[5024]: I1007 12:49:45.000932 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a6533041-d509-4740-9ed5-06cdf97e7340-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:45 crc kubenswrapper[5024]: I1007 12:49:45.001021 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a6533041-d509-4740-9ed5-06cdf97e7340-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:45 crc kubenswrapper[5024]: I1007 12:49:45.001553 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a6533041-d509-4740-9ed5-06cdf97e7340-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:45 crc kubenswrapper[5024]: I1007 12:49:45.001582 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a6533041-d509-4740-9ed5-06cdf97e7340-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:45 crc kubenswrapper[5024]: I1007 12:49:45.012428 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xpx2\" (UniqueName: \"kubernetes.io/projected/a6533041-d509-4740-9ed5-06cdf97e7340-kube-api-access-5xpx2\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:45 crc kubenswrapper[5024]: I1007 12:49:45.022165 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6533041-d509-4740-9ed5-06cdf97e7340\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:45 crc kubenswrapper[5024]: I1007 12:49:45.147696 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Oct 07 12:49:45 crc kubenswrapper[5024]: I1007 12:49:45.560808 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 07 12:49:45 crc kubenswrapper[5024]: W1007 12:49:45.564955 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6533041_d509_4740_9ed5_06cdf97e7340.slice/crio-1d26c80da496cd401bd56ac2acde8a28b96461d254580179a7efd19b91a12f66 WatchSource:0}: Error finding container 1d26c80da496cd401bd56ac2acde8a28b96461d254580179a7efd19b91a12f66: Status 404 returned error can't find the container with id 1d26c80da496cd401bd56ac2acde8a28b96461d254580179a7efd19b91a12f66
Oct 07 12:49:45 crc kubenswrapper[5024]: I1007 12:49:45.722374 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a6533041-d509-4740-9ed5-06cdf97e7340","Type":"ContainerStarted","Data":"1d26c80da496cd401bd56ac2acde8a28b96461d254580179a7efd19b91a12f66"}
Oct 07 12:49:46 crc kubenswrapper[5024]: I1007 12:49:46.732993 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"10cd989b-34e3-4e21-bb69-40115806b190","Type":"ContainerStarted","Data":"221f207d1787e1145d2527807b90321eb68c2b45a0d2ea92eb14e5c519423693"}
Oct 07 12:49:46 crc kubenswrapper[5024]: I1007 12:49:46.769007 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb894b3e-4bf4-46b0-8e54-e4a17c02d13f" path="/var/lib/kubelet/pods/cb894b3e-4bf4-46b0-8e54-e4a17c02d13f/volumes"
Oct 07 12:49:47 crc kubenswrapper[5024]: I1007 12:49:47.742528 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a6533041-d509-4740-9ed5-06cdf97e7340","Type":"ContainerStarted","Data":"e1ab42abf464b7c504e4004bd143ae8b5bd207260eb6228335a3367a227ea2d0"}
Oct 07 12:49:49 crc kubenswrapper[5024]: I1007 12:49:49.197874 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-49vq2"]
Oct 07 12:49:49 crc kubenswrapper[5024]: I1007 12:49:49.199617 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2"
Oct 07 12:49:49 crc kubenswrapper[5024]: I1007 12:49:49.201027 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam"
Oct 07 12:49:49 crc kubenswrapper[5024]: I1007 12:49:49.212962 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-49vq2"]
Oct 07 12:49:49 crc kubenswrapper[5024]: I1007 12:49:49.289726 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tjvn\" (UniqueName: \"kubernetes.io/projected/03ddcf1f-8a56-4b8a-9755-648366444dc2-kube-api-access-2tjvn\") pod \"dnsmasq-dns-6447ccbd8f-49vq2\" (UID: \"03ddcf1f-8a56-4b8a-9755-648366444dc2\") " pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2"
Oct 07 12:49:49 crc kubenswrapper[5024]: I1007 12:49:49.289784 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-49vq2\" (UID: \"03ddcf1f-8a56-4b8a-9755-648366444dc2\") " pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2"
Oct 07 12:49:49 crc kubenswrapper[5024]: I1007 12:49:49.289840 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-config\") pod \"dnsmasq-dns-6447ccbd8f-49vq2\" (UID: \"03ddcf1f-8a56-4b8a-9755-648366444dc2\") " pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2"
Oct 07 12:49:49 crc kubenswrapper[5024]: I1007 12:49:49.289867 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-49vq2\" (UID: \"03ddcf1f-8a56-4b8a-9755-648366444dc2\") " pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2"
Oct 07 12:49:49 crc kubenswrapper[5024]: I1007 12:49:49.289886 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-49vq2\" (UID: \"03ddcf1f-8a56-4b8a-9755-648366444dc2\") " pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2"
Oct 07 12:49:49 crc kubenswrapper[5024]: I1007 12:49:49.289911 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-49vq2\" (UID: \"03ddcf1f-8a56-4b8a-9755-648366444dc2\") " pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2"
Oct 07 12:49:49 crc kubenswrapper[5024]: I1007 12:49:49.391010 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-49vq2\" (UID: \"03ddcf1f-8a56-4b8a-9755-648366444dc2\") " pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2"
Oct 07 12:49:49 crc kubenswrapper[5024]: I1007 12:49:49.391424 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tjvn\" (UniqueName: \"kubernetes.io/projected/03ddcf1f-8a56-4b8a-9755-648366444dc2-kube-api-access-2tjvn\") pod \"dnsmasq-dns-6447ccbd8f-49vq2\" (UID: \"03ddcf1f-8a56-4b8a-9755-648366444dc2\") " pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2"
Oct 07 12:49:49 crc kubenswrapper[5024]: I1007 12:49:49.391557 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-49vq2\" (UID: \"03ddcf1f-8a56-4b8a-9755-648366444dc2\") " pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2"
Oct 07 12:49:49 crc kubenswrapper[5024]: I1007 12:49:49.391743 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-config\") pod \"dnsmasq-dns-6447ccbd8f-49vq2\" (UID: \"03ddcf1f-8a56-4b8a-9755-648366444dc2\") " pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2"
Oct 07 12:49:49 crc kubenswrapper[5024]: I1007 12:49:49.391867 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-49vq2\" (UID: \"03ddcf1f-8a56-4b8a-9755-648366444dc2\") " pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2"
Oct 07 12:49:49 crc kubenswrapper[5024]: I1007 12:49:49.391996 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-49vq2\" (UID: \"03ddcf1f-8a56-4b8a-9755-648366444dc2\") " pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2"
Oct 07 12:49:49 crc kubenswrapper[5024]: I1007 12:49:49.391928 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-49vq2\" (UID: \"03ddcf1f-8a56-4b8a-9755-648366444dc2\") " pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2"
Oct 07 12:49:49 crc kubenswrapper[5024]: I1007 12:49:49.392434 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-49vq2\" (UID: \"03ddcf1f-8a56-4b8a-9755-648366444dc2\") " pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2"
Oct 07 12:49:49 crc kubenswrapper[5024]: I1007 12:49:49.392671 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-49vq2\" (UID: \"03ddcf1f-8a56-4b8a-9755-648366444dc2\") " pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2"
Oct 07 12:49:49 crc kubenswrapper[5024]: I1007 12:49:49.392803 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-49vq2\" (UID: \"03ddcf1f-8a56-4b8a-9755-648366444dc2\") " pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2"
Oct 07 12:49:49 crc kubenswrapper[5024]: I1007 12:49:49.393544 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-config\") pod \"dnsmasq-dns-6447ccbd8f-49vq2\" (UID: \"03ddcf1f-8a56-4b8a-9755-648366444dc2\") " pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2"
Oct 07 12:49:49 crc kubenswrapper[5024]: I1007 12:49:49.415900 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tjvn\" (UniqueName: \"kubernetes.io/projected/03ddcf1f-8a56-4b8a-9755-648366444dc2-kube-api-access-2tjvn\") pod \"dnsmasq-dns-6447ccbd8f-49vq2\" (UID: \"03ddcf1f-8a56-4b8a-9755-648366444dc2\") " pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2"
Oct 07 12:49:49 crc kubenswrapper[5024]: I1007 12:49:49.517584 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2"
Oct 07 12:49:49 crc kubenswrapper[5024]: I1007 12:49:49.929845 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-49vq2"]
Oct 07 12:49:50 crc kubenswrapper[5024]: I1007 12:49:50.772307 5024 generic.go:334] "Generic (PLEG): container finished" podID="03ddcf1f-8a56-4b8a-9755-648366444dc2" containerID="43c4471216f63b7022fca5b0150505a9c82c7a88ba25bc95c8ce6ada73741aba" exitCode=0
Oct 07 12:49:50 crc kubenswrapper[5024]: I1007 12:49:50.772662 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2" event={"ID":"03ddcf1f-8a56-4b8a-9755-648366444dc2","Type":"ContainerDied","Data":"43c4471216f63b7022fca5b0150505a9c82c7a88ba25bc95c8ce6ada73741aba"}
Oct 07 12:49:50 crc kubenswrapper[5024]: I1007 12:49:50.772878 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2" event={"ID":"03ddcf1f-8a56-4b8a-9755-648366444dc2","Type":"ContainerStarted","Data":"136dfae0ea4342f2f1761ea2ff37849288e22c4bf57ae8a42c005d65212e524e"}
Oct 07 12:49:51 crc kubenswrapper[5024]: I1007 12:49:51.782824 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2" event={"ID":"03ddcf1f-8a56-4b8a-9755-648366444dc2","Type":"ContainerStarted","Data":"975698835be1c2e71a5659dac1a4c6d5884d49bc1273b323ed9485e96d015ffc"}
Oct 07 12:49:51 crc kubenswrapper[5024]: I1007 12:49:51.783195 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2"
Oct 07 12:49:51 crc kubenswrapper[5024]: I1007 12:49:51.803395 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2" podStartSLOduration=2.803368926 podStartE2EDuration="2.803368926s" podCreationTimestamp="2025-10-07 12:49:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:49:51.80009836 +0000 UTC m=+1329.875885248" watchObservedRunningTime="2025-10-07 12:49:51.803368926 +0000 UTC m=+1329.879155774"
Oct 07 12:49:59 crc kubenswrapper[5024]: I1007 12:49:59.519268 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2"
Oct 07 12:49:59 crc kubenswrapper[5024]: I1007 12:49:59.618342 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-vb5v7"]
Oct 07 12:49:59 crc kubenswrapper[5024]: I1007 12:49:59.618645 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b856c5697-vb5v7" podUID="6052e097-7c9e-4277-a920-72d8923dc001" containerName="dnsmasq-dns" containerID="cri-o://59edada1083546ba59e2f4f7892edd7cbef340c0566cf7f667f6255d691f6f77" gracePeriod=10
Oct 07 12:49:59 crc kubenswrapper[5024]: I1007 12:49:59.766637 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-nd8ds"]
Oct 07 12:49:59 crc kubenswrapper[5024]: I1007 12:49:59.772168 5024 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" Oct 07 12:49:59 crc kubenswrapper[5024]: I1007 12:49:59.790472 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-nd8ds"] Oct 07 12:49:59 crc kubenswrapper[5024]: I1007 12:49:59.865102 5024 generic.go:334] "Generic (PLEG): container finished" podID="6052e097-7c9e-4277-a920-72d8923dc001" containerID="59edada1083546ba59e2f4f7892edd7cbef340c0566cf7f667f6255d691f6f77" exitCode=0 Oct 07 12:49:59 crc kubenswrapper[5024]: I1007 12:49:59.865169 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-vb5v7" event={"ID":"6052e097-7c9e-4277-a920-72d8923dc001","Type":"ContainerDied","Data":"59edada1083546ba59e2f4f7892edd7cbef340c0566cf7f667f6255d691f6f77"} Oct 07 12:49:59 crc kubenswrapper[5024]: I1007 12:49:59.906238 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-nd8ds\" (UID: \"e59a7592-dced-40e5-abb1-d85862ca5ac7\") " pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" Oct 07 12:49:59 crc kubenswrapper[5024]: I1007 12:49:59.906463 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52qxw\" (UniqueName: \"kubernetes.io/projected/e59a7592-dced-40e5-abb1-d85862ca5ac7-kube-api-access-52qxw\") pod \"dnsmasq-dns-864d5fc68c-nd8ds\" (UID: \"e59a7592-dced-40e5-abb1-d85862ca5ac7\") " pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" Oct 07 12:49:59 crc kubenswrapper[5024]: I1007 12:49:59.906607 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-config\") pod \"dnsmasq-dns-864d5fc68c-nd8ds\" (UID: \"e59a7592-dced-40e5-abb1-d85862ca5ac7\") " 
pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" Oct 07 12:49:59 crc kubenswrapper[5024]: I1007 12:49:59.906747 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-nd8ds\" (UID: \"e59a7592-dced-40e5-abb1-d85862ca5ac7\") " pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" Oct 07 12:49:59 crc kubenswrapper[5024]: I1007 12:49:59.906816 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-nd8ds\" (UID: \"e59a7592-dced-40e5-abb1-d85862ca5ac7\") " pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" Oct 07 12:49:59 crc kubenswrapper[5024]: I1007 12:49:59.906879 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-nd8ds\" (UID: \"e59a7592-dced-40e5-abb1-d85862ca5ac7\") " pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.008433 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-nd8ds\" (UID: \"e59a7592-dced-40e5-abb1-d85862ca5ac7\") " pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.008478 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-nd8ds\" (UID: 
\"e59a7592-dced-40e5-abb1-d85862ca5ac7\") " pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.008529 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-nd8ds\" (UID: \"e59a7592-dced-40e5-abb1-d85862ca5ac7\") " pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.008596 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52qxw\" (UniqueName: \"kubernetes.io/projected/e59a7592-dced-40e5-abb1-d85862ca5ac7-kube-api-access-52qxw\") pod \"dnsmasq-dns-864d5fc68c-nd8ds\" (UID: \"e59a7592-dced-40e5-abb1-d85862ca5ac7\") " pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.008623 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-config\") pod \"dnsmasq-dns-864d5fc68c-nd8ds\" (UID: \"e59a7592-dced-40e5-abb1-d85862ca5ac7\") " pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.008669 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-nd8ds\" (UID: \"e59a7592-dced-40e5-abb1-d85862ca5ac7\") " pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.009417 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-nd8ds\" (UID: \"e59a7592-dced-40e5-abb1-d85862ca5ac7\") " 
pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.009472 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-nd8ds\" (UID: \"e59a7592-dced-40e5-abb1-d85862ca5ac7\") " pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.009472 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-nd8ds\" (UID: \"e59a7592-dced-40e5-abb1-d85862ca5ac7\") " pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.009786 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-nd8ds\" (UID: \"e59a7592-dced-40e5-abb1-d85862ca5ac7\") " pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.010082 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-config\") pod \"dnsmasq-dns-864d5fc68c-nd8ds\" (UID: \"e59a7592-dced-40e5-abb1-d85862ca5ac7\") " pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.031095 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52qxw\" (UniqueName: \"kubernetes.io/projected/e59a7592-dced-40e5-abb1-d85862ca5ac7-kube-api-access-52qxw\") pod \"dnsmasq-dns-864d5fc68c-nd8ds\" (UID: \"e59a7592-dced-40e5-abb1-d85862ca5ac7\") " pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 
12:50:00.112921 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.116283 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-vb5v7" Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.212061 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd98f\" (UniqueName: \"kubernetes.io/projected/6052e097-7c9e-4277-a920-72d8923dc001-kube-api-access-qd98f\") pod \"6052e097-7c9e-4277-a920-72d8923dc001\" (UID: \"6052e097-7c9e-4277-a920-72d8923dc001\") " Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.212131 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6052e097-7c9e-4277-a920-72d8923dc001-dns-svc\") pod \"6052e097-7c9e-4277-a920-72d8923dc001\" (UID: \"6052e097-7c9e-4277-a920-72d8923dc001\") " Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.212235 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6052e097-7c9e-4277-a920-72d8923dc001-ovsdbserver-nb\") pod \"6052e097-7c9e-4277-a920-72d8923dc001\" (UID: \"6052e097-7c9e-4277-a920-72d8923dc001\") " Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.212354 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6052e097-7c9e-4277-a920-72d8923dc001-ovsdbserver-sb\") pod \"6052e097-7c9e-4277-a920-72d8923dc001\" (UID: \"6052e097-7c9e-4277-a920-72d8923dc001\") " Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.212385 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6052e097-7c9e-4277-a920-72d8923dc001-config\") pod 
\"6052e097-7c9e-4277-a920-72d8923dc001\" (UID: \"6052e097-7c9e-4277-a920-72d8923dc001\") " Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.217396 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6052e097-7c9e-4277-a920-72d8923dc001-kube-api-access-qd98f" (OuterVolumeSpecName: "kube-api-access-qd98f") pod "6052e097-7c9e-4277-a920-72d8923dc001" (UID: "6052e097-7c9e-4277-a920-72d8923dc001"). InnerVolumeSpecName "kube-api-access-qd98f". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.283314 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6052e097-7c9e-4277-a920-72d8923dc001-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6052e097-7c9e-4277-a920-72d8923dc001" (UID: "6052e097-7c9e-4277-a920-72d8923dc001"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.290282 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6052e097-7c9e-4277-a920-72d8923dc001-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6052e097-7c9e-4277-a920-72d8923dc001" (UID: "6052e097-7c9e-4277-a920-72d8923dc001"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.291564 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6052e097-7c9e-4277-a920-72d8923dc001-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6052e097-7c9e-4277-a920-72d8923dc001" (UID: "6052e097-7c9e-4277-a920-72d8923dc001"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.298193 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6052e097-7c9e-4277-a920-72d8923dc001-config" (OuterVolumeSpecName: "config") pod "6052e097-7c9e-4277-a920-72d8923dc001" (UID: "6052e097-7c9e-4277-a920-72d8923dc001"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.314378 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd98f\" (UniqueName: \"kubernetes.io/projected/6052e097-7c9e-4277-a920-72d8923dc001-kube-api-access-qd98f\") on node \"crc\" DevicePath \"\"" Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.314403 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6052e097-7c9e-4277-a920-72d8923dc001-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.314412 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6052e097-7c9e-4277-a920-72d8923dc001-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.314420 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6052e097-7c9e-4277-a920-72d8923dc001-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.314428 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6052e097-7c9e-4277-a920-72d8923dc001-config\") on node \"crc\" DevicePath \"\"" Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.628812 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-nd8ds"] Oct 07 12:50:00 crc 
kubenswrapper[5024]: W1007 12:50:00.631388 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode59a7592_dced_40e5_abb1_d85862ca5ac7.slice/crio-79f1d80a07e950eecb3122693e459acb7d8c468b4665521fbca059d0d0f78e41 WatchSource:0}: Error finding container 79f1d80a07e950eecb3122693e459acb7d8c468b4665521fbca059d0d0f78e41: Status 404 returned error can't find the container with id 79f1d80a07e950eecb3122693e459acb7d8c468b4665521fbca059d0d0f78e41 Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.875486 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" event={"ID":"e59a7592-dced-40e5-abb1-d85862ca5ac7","Type":"ContainerStarted","Data":"79f1d80a07e950eecb3122693e459acb7d8c468b4665521fbca059d0d0f78e41"} Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.877808 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-vb5v7" event={"ID":"6052e097-7c9e-4277-a920-72d8923dc001","Type":"ContainerDied","Data":"cdfc4f7f589e2c821dbe446c40f36218121c22f183ad688c2f1882ef50ed5183"} Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.877887 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-vb5v7" Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.878021 5024 scope.go:117] "RemoveContainer" containerID="59edada1083546ba59e2f4f7892edd7cbef340c0566cf7f667f6255d691f6f77" Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.903409 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-vb5v7"] Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.903925 5024 scope.go:117] "RemoveContainer" containerID="2a55f44791aeb6318214eed5416723e94fe3ed798bac3887fc83f52740535c9c" Oct 07 12:50:00 crc kubenswrapper[5024]: I1007 12:50:00.915638 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-vb5v7"] Oct 07 12:50:01 crc kubenswrapper[5024]: I1007 12:50:01.891001 5024 generic.go:334] "Generic (PLEG): container finished" podID="e59a7592-dced-40e5-abb1-d85862ca5ac7" containerID="2cc5b8aef1af9f77ea971cc1771ad6246abd4a448ef466768b2a51c77ec59c2b" exitCode=0 Oct 07 12:50:01 crc kubenswrapper[5024]: I1007 12:50:01.891083 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" event={"ID":"e59a7592-dced-40e5-abb1-d85862ca5ac7","Type":"ContainerDied","Data":"2cc5b8aef1af9f77ea971cc1771ad6246abd4a448ef466768b2a51c77ec59c2b"} Oct 07 12:50:02 crc kubenswrapper[5024]: I1007 12:50:02.762629 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6052e097-7c9e-4277-a920-72d8923dc001" path="/var/lib/kubelet/pods/6052e097-7c9e-4277-a920-72d8923dc001/volumes" Oct 07 12:50:02 crc kubenswrapper[5024]: I1007 12:50:02.904207 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" event={"ID":"e59a7592-dced-40e5-abb1-d85862ca5ac7","Type":"ContainerStarted","Data":"3fa5c2adddb0e72f79b745a61b3c66d71bcbfcbef1f0ae2a1e65ed9310b4b749"} Oct 07 12:50:02 crc kubenswrapper[5024]: I1007 12:50:02.904369 5024 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" Oct 07 12:50:02 crc kubenswrapper[5024]: I1007 12:50:02.921848 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" podStartSLOduration=3.921826114 podStartE2EDuration="3.921826114s" podCreationTimestamp="2025-10-07 12:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:50:02.918905978 +0000 UTC m=+1340.994692836" watchObservedRunningTime="2025-10-07 12:50:02.921826114 +0000 UTC m=+1340.997612962" Oct 07 12:50:10 crc kubenswrapper[5024]: I1007 12:50:10.115489 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" Oct 07 12:50:10 crc kubenswrapper[5024]: I1007 12:50:10.213904 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-49vq2"] Oct 07 12:50:10 crc kubenswrapper[5024]: I1007 12:50:10.214569 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2" podUID="03ddcf1f-8a56-4b8a-9755-648366444dc2" containerName="dnsmasq-dns" containerID="cri-o://975698835be1c2e71a5659dac1a4c6d5884d49bc1273b323ed9485e96d015ffc" gracePeriod=10 Oct 07 12:50:10 crc kubenswrapper[5024]: I1007 12:50:10.684779 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2" Oct 07 12:50:10 crc kubenswrapper[5024]: I1007 12:50:10.750224 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-dns-svc\") pod \"03ddcf1f-8a56-4b8a-9755-648366444dc2\" (UID: \"03ddcf1f-8a56-4b8a-9755-648366444dc2\") " Oct 07 12:50:10 crc kubenswrapper[5024]: I1007 12:50:10.750350 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-ovsdbserver-nb\") pod \"03ddcf1f-8a56-4b8a-9755-648366444dc2\" (UID: \"03ddcf1f-8a56-4b8a-9755-648366444dc2\") " Oct 07 12:50:10 crc kubenswrapper[5024]: I1007 12:50:10.750393 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-openstack-edpm-ipam\") pod \"03ddcf1f-8a56-4b8a-9755-648366444dc2\" (UID: \"03ddcf1f-8a56-4b8a-9755-648366444dc2\") " Oct 07 12:50:10 crc kubenswrapper[5024]: I1007 12:50:10.750410 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-ovsdbserver-sb\") pod \"03ddcf1f-8a56-4b8a-9755-648366444dc2\" (UID: \"03ddcf1f-8a56-4b8a-9755-648366444dc2\") " Oct 07 12:50:10 crc kubenswrapper[5024]: I1007 12:50:10.750596 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tjvn\" (UniqueName: \"kubernetes.io/projected/03ddcf1f-8a56-4b8a-9755-648366444dc2-kube-api-access-2tjvn\") pod \"03ddcf1f-8a56-4b8a-9755-648366444dc2\" (UID: \"03ddcf1f-8a56-4b8a-9755-648366444dc2\") " Oct 07 12:50:10 crc kubenswrapper[5024]: I1007 12:50:10.750642 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-config\") pod \"03ddcf1f-8a56-4b8a-9755-648366444dc2\" (UID: \"03ddcf1f-8a56-4b8a-9755-648366444dc2\") " Oct 07 12:50:10 crc kubenswrapper[5024]: I1007 12:50:10.757311 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03ddcf1f-8a56-4b8a-9755-648366444dc2-kube-api-access-2tjvn" (OuterVolumeSpecName: "kube-api-access-2tjvn") pod "03ddcf1f-8a56-4b8a-9755-648366444dc2" (UID: "03ddcf1f-8a56-4b8a-9755-648366444dc2"). InnerVolumeSpecName "kube-api-access-2tjvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:50:10 crc kubenswrapper[5024]: I1007 12:50:10.798569 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "03ddcf1f-8a56-4b8a-9755-648366444dc2" (UID: "03ddcf1f-8a56-4b8a-9755-648366444dc2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:50:10 crc kubenswrapper[5024]: I1007 12:50:10.805479 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "03ddcf1f-8a56-4b8a-9755-648366444dc2" (UID: "03ddcf1f-8a56-4b8a-9755-648366444dc2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:50:10 crc kubenswrapper[5024]: I1007 12:50:10.806420 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "03ddcf1f-8a56-4b8a-9755-648366444dc2" (UID: "03ddcf1f-8a56-4b8a-9755-648366444dc2"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:50:10 crc kubenswrapper[5024]: I1007 12:50:10.806668 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-config" (OuterVolumeSpecName: "config") pod "03ddcf1f-8a56-4b8a-9755-648366444dc2" (UID: "03ddcf1f-8a56-4b8a-9755-648366444dc2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:50:10 crc kubenswrapper[5024]: I1007 12:50:10.809147 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "03ddcf1f-8a56-4b8a-9755-648366444dc2" (UID: "03ddcf1f-8a56-4b8a-9755-648366444dc2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 12:50:10 crc kubenswrapper[5024]: I1007 12:50:10.853749 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tjvn\" (UniqueName: \"kubernetes.io/projected/03ddcf1f-8a56-4b8a-9755-648366444dc2-kube-api-access-2tjvn\") on node \"crc\" DevicePath \"\"" Oct 07 12:50:10 crc kubenswrapper[5024]: I1007 12:50:10.853785 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-config\") on node \"crc\" DevicePath \"\"" Oct 07 12:50:10 crc kubenswrapper[5024]: I1007 12:50:10.853798 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 07 12:50:10 crc kubenswrapper[5024]: I1007 12:50:10.853810 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 07 12:50:10 crc 
kubenswrapper[5024]: I1007 12:50:10.853821 5024 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 07 12:50:10 crc kubenswrapper[5024]: I1007 12:50:10.853831 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03ddcf1f-8a56-4b8a-9755-648366444dc2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 07 12:50:10 crc kubenswrapper[5024]: I1007 12:50:10.989202 5024 generic.go:334] "Generic (PLEG): container finished" podID="03ddcf1f-8a56-4b8a-9755-648366444dc2" containerID="975698835be1c2e71a5659dac1a4c6d5884d49bc1273b323ed9485e96d015ffc" exitCode=0 Oct 07 12:50:10 crc kubenswrapper[5024]: I1007 12:50:10.989274 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2" event={"ID":"03ddcf1f-8a56-4b8a-9755-648366444dc2","Type":"ContainerDied","Data":"975698835be1c2e71a5659dac1a4c6d5884d49bc1273b323ed9485e96d015ffc"} Oct 07 12:50:10 crc kubenswrapper[5024]: I1007 12:50:10.989309 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2" Oct 07 12:50:10 crc kubenswrapper[5024]: I1007 12:50:10.989339 5024 scope.go:117] "RemoveContainer" containerID="975698835be1c2e71a5659dac1a4c6d5884d49bc1273b323ed9485e96d015ffc" Oct 07 12:50:10 crc kubenswrapper[5024]: I1007 12:50:10.989319 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-49vq2" event={"ID":"03ddcf1f-8a56-4b8a-9755-648366444dc2","Type":"ContainerDied","Data":"136dfae0ea4342f2f1761ea2ff37849288e22c4bf57ae8a42c005d65212e524e"} Oct 07 12:50:11 crc kubenswrapper[5024]: I1007 12:50:11.009774 5024 scope.go:117] "RemoveContainer" containerID="43c4471216f63b7022fca5b0150505a9c82c7a88ba25bc95c8ce6ada73741aba" Oct 07 12:50:11 crc kubenswrapper[5024]: I1007 12:50:11.029472 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-49vq2"] Oct 07 12:50:11 crc kubenswrapper[5024]: I1007 12:50:11.035401 5024 scope.go:117] "RemoveContainer" containerID="975698835be1c2e71a5659dac1a4c6d5884d49bc1273b323ed9485e96d015ffc" Oct 07 12:50:11 crc kubenswrapper[5024]: E1007 12:50:11.036040 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"975698835be1c2e71a5659dac1a4c6d5884d49bc1273b323ed9485e96d015ffc\": container with ID starting with 975698835be1c2e71a5659dac1a4c6d5884d49bc1273b323ed9485e96d015ffc not found: ID does not exist" containerID="975698835be1c2e71a5659dac1a4c6d5884d49bc1273b323ed9485e96d015ffc" Oct 07 12:50:11 crc kubenswrapper[5024]: I1007 12:50:11.036121 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"975698835be1c2e71a5659dac1a4c6d5884d49bc1273b323ed9485e96d015ffc"} err="failed to get container status \"975698835be1c2e71a5659dac1a4c6d5884d49bc1273b323ed9485e96d015ffc\": rpc error: code = NotFound desc = could not find container 
\"975698835be1c2e71a5659dac1a4c6d5884d49bc1273b323ed9485e96d015ffc\": container with ID starting with 975698835be1c2e71a5659dac1a4c6d5884d49bc1273b323ed9485e96d015ffc not found: ID does not exist" Oct 07 12:50:11 crc kubenswrapper[5024]: I1007 12:50:11.036180 5024 scope.go:117] "RemoveContainer" containerID="43c4471216f63b7022fca5b0150505a9c82c7a88ba25bc95c8ce6ada73741aba" Oct 07 12:50:11 crc kubenswrapper[5024]: E1007 12:50:11.036675 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43c4471216f63b7022fca5b0150505a9c82c7a88ba25bc95c8ce6ada73741aba\": container with ID starting with 43c4471216f63b7022fca5b0150505a9c82c7a88ba25bc95c8ce6ada73741aba not found: ID does not exist" containerID="43c4471216f63b7022fca5b0150505a9c82c7a88ba25bc95c8ce6ada73741aba" Oct 07 12:50:11 crc kubenswrapper[5024]: I1007 12:50:11.036708 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43c4471216f63b7022fca5b0150505a9c82c7a88ba25bc95c8ce6ada73741aba"} err="failed to get container status \"43c4471216f63b7022fca5b0150505a9c82c7a88ba25bc95c8ce6ada73741aba\": rpc error: code = NotFound desc = could not find container \"43c4471216f63b7022fca5b0150505a9c82c7a88ba25bc95c8ce6ada73741aba\": container with ID starting with 43c4471216f63b7022fca5b0150505a9c82c7a88ba25bc95c8ce6ada73741aba not found: ID does not exist" Oct 07 12:50:11 crc kubenswrapper[5024]: I1007 12:50:11.036902 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-49vq2"] Oct 07 12:50:12 crc kubenswrapper[5024]: I1007 12:50:12.762978 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03ddcf1f-8a56-4b8a-9755-648366444dc2" path="/var/lib/kubelet/pods/03ddcf1f-8a56-4b8a-9755-648366444dc2/volumes" Oct 07 12:50:15 crc kubenswrapper[5024]: I1007 12:50:15.851831 5024 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv"] Oct 07 12:50:15 crc kubenswrapper[5024]: E1007 12:50:15.852609 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6052e097-7c9e-4277-a920-72d8923dc001" containerName="init" Oct 07 12:50:15 crc kubenswrapper[5024]: I1007 12:50:15.852624 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="6052e097-7c9e-4277-a920-72d8923dc001" containerName="init" Oct 07 12:50:15 crc kubenswrapper[5024]: E1007 12:50:15.852636 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6052e097-7c9e-4277-a920-72d8923dc001" containerName="dnsmasq-dns" Oct 07 12:50:15 crc kubenswrapper[5024]: I1007 12:50:15.852646 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="6052e097-7c9e-4277-a920-72d8923dc001" containerName="dnsmasq-dns" Oct 07 12:50:15 crc kubenswrapper[5024]: E1007 12:50:15.852668 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ddcf1f-8a56-4b8a-9755-648366444dc2" containerName="dnsmasq-dns" Oct 07 12:50:15 crc kubenswrapper[5024]: I1007 12:50:15.852676 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ddcf1f-8a56-4b8a-9755-648366444dc2" containerName="dnsmasq-dns" Oct 07 12:50:15 crc kubenswrapper[5024]: E1007 12:50:15.852703 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ddcf1f-8a56-4b8a-9755-648366444dc2" containerName="init" Oct 07 12:50:15 crc kubenswrapper[5024]: I1007 12:50:15.852711 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ddcf1f-8a56-4b8a-9755-648366444dc2" containerName="init" Oct 07 12:50:15 crc kubenswrapper[5024]: I1007 12:50:15.852906 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="6052e097-7c9e-4277-a920-72d8923dc001" containerName="dnsmasq-dns" Oct 07 12:50:15 crc kubenswrapper[5024]: I1007 12:50:15.852928 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="03ddcf1f-8a56-4b8a-9755-648366444dc2" containerName="dnsmasq-dns" 
Oct 07 12:50:15 crc kubenswrapper[5024]: I1007 12:50:15.853636 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv" Oct 07 12:50:15 crc kubenswrapper[5024]: I1007 12:50:15.856293 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 07 12:50:15 crc kubenswrapper[5024]: I1007 12:50:15.856914 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-424lb" Oct 07 12:50:15 crc kubenswrapper[5024]: I1007 12:50:15.857201 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 07 12:50:15 crc kubenswrapper[5024]: I1007 12:50:15.857956 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 07 12:50:15 crc kubenswrapper[5024]: I1007 12:50:15.867102 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv"] Oct 07 12:50:15 crc kubenswrapper[5024]: I1007 12:50:15.941716 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b6d878-915c-4356-bce8-14013e435c92-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv\" (UID: \"b9b6d878-915c-4356-bce8-14013e435c92\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv" Oct 07 12:50:15 crc kubenswrapper[5024]: I1007 12:50:15.942104 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9b6d878-915c-4356-bce8-14013e435c92-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv\" (UID: \"b9b6d878-915c-4356-bce8-14013e435c92\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv" Oct 07 12:50:15 crc kubenswrapper[5024]: I1007 12:50:15.942373 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-597jh\" (UniqueName: \"kubernetes.io/projected/b9b6d878-915c-4356-bce8-14013e435c92-kube-api-access-597jh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv\" (UID: \"b9b6d878-915c-4356-bce8-14013e435c92\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv" Oct 07 12:50:15 crc kubenswrapper[5024]: I1007 12:50:15.942426 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9b6d878-915c-4356-bce8-14013e435c92-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv\" (UID: \"b9b6d878-915c-4356-bce8-14013e435c92\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv" Oct 07 12:50:16 crc kubenswrapper[5024]: I1007 12:50:16.044420 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9b6d878-915c-4356-bce8-14013e435c92-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv\" (UID: \"b9b6d878-915c-4356-bce8-14013e435c92\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv" Oct 07 12:50:16 crc kubenswrapper[5024]: I1007 12:50:16.044502 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b6d878-915c-4356-bce8-14013e435c92-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv\" (UID: \"b9b6d878-915c-4356-bce8-14013e435c92\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv" Oct 07 12:50:16 crc kubenswrapper[5024]: I1007 12:50:16.044632 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9b6d878-915c-4356-bce8-14013e435c92-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv\" (UID: \"b9b6d878-915c-4356-bce8-14013e435c92\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv" Oct 07 12:50:16 crc kubenswrapper[5024]: I1007 12:50:16.044657 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-597jh\" (UniqueName: \"kubernetes.io/projected/b9b6d878-915c-4356-bce8-14013e435c92-kube-api-access-597jh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv\" (UID: \"b9b6d878-915c-4356-bce8-14013e435c92\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv" Oct 07 12:50:16 crc kubenswrapper[5024]: I1007 12:50:16.052397 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b6d878-915c-4356-bce8-14013e435c92-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv\" (UID: \"b9b6d878-915c-4356-bce8-14013e435c92\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv" Oct 07 12:50:16 crc kubenswrapper[5024]: I1007 12:50:16.052397 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9b6d878-915c-4356-bce8-14013e435c92-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv\" (UID: \"b9b6d878-915c-4356-bce8-14013e435c92\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv" Oct 07 12:50:16 crc kubenswrapper[5024]: I1007 12:50:16.052849 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9b6d878-915c-4356-bce8-14013e435c92-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv\" (UID: \"b9b6d878-915c-4356-bce8-14013e435c92\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv" Oct 07 12:50:16 crc kubenswrapper[5024]: I1007 12:50:16.064294 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-597jh\" (UniqueName: \"kubernetes.io/projected/b9b6d878-915c-4356-bce8-14013e435c92-kube-api-access-597jh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv\" (UID: \"b9b6d878-915c-4356-bce8-14013e435c92\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv" Oct 07 12:50:16 crc kubenswrapper[5024]: I1007 12:50:16.209282 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv" Oct 07 12:50:16 crc kubenswrapper[5024]: I1007 12:50:16.732388 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv"] Oct 07 12:50:16 crc kubenswrapper[5024]: I1007 12:50:16.735327 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 07 12:50:17 crc kubenswrapper[5024]: I1007 12:50:17.076196 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv" event={"ID":"b9b6d878-915c-4356-bce8-14013e435c92","Type":"ContainerStarted","Data":"fd9c9701a9d5ed75d5b39a91d8b91828707f3c9ccc30e43957b2b991e3d94a13"} Oct 07 12:50:19 crc kubenswrapper[5024]: I1007 12:50:19.095620 5024 generic.go:334] "Generic (PLEG): container finished" podID="10cd989b-34e3-4e21-bb69-40115806b190" containerID="221f207d1787e1145d2527807b90321eb68c2b45a0d2ea92eb14e5c519423693" exitCode=0 Oct 07 12:50:19 crc kubenswrapper[5024]: I1007 12:50:19.095695 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"10cd989b-34e3-4e21-bb69-40115806b190","Type":"ContainerDied","Data":"221f207d1787e1145d2527807b90321eb68c2b45a0d2ea92eb14e5c519423693"} Oct 07 12:50:20 crc 
kubenswrapper[5024]: I1007 12:50:20.107890 5024 generic.go:334] "Generic (PLEG): container finished" podID="a6533041-d509-4740-9ed5-06cdf97e7340" containerID="e1ab42abf464b7c504e4004bd143ae8b5bd207260eb6228335a3367a227ea2d0" exitCode=0 Oct 07 12:50:20 crc kubenswrapper[5024]: I1007 12:50:20.107986 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a6533041-d509-4740-9ed5-06cdf97e7340","Type":"ContainerDied","Data":"e1ab42abf464b7c504e4004bd143ae8b5bd207260eb6228335a3367a227ea2d0"} Oct 07 12:50:20 crc kubenswrapper[5024]: I1007 12:50:20.115777 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"10cd989b-34e3-4e21-bb69-40115806b190","Type":"ContainerStarted","Data":"3e2ff242cb737755d08285e4906c6c2cb0933e47db7c028bae5aaa4ea63141f4"} Oct 07 12:50:20 crc kubenswrapper[5024]: I1007 12:50:20.156360 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.156328582 podStartE2EDuration="37.156328582s" podCreationTimestamp="2025-10-07 12:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:50:20.149891454 +0000 UTC m=+1358.225678292" watchObservedRunningTime="2025-10-07 12:50:20.156328582 +0000 UTC m=+1358.232115420" Oct 07 12:50:24 crc kubenswrapper[5024]: I1007 12:50:24.138750 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 07 12:50:26 crc kubenswrapper[5024]: I1007 12:50:26.183030 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv" event={"ID":"b9b6d878-915c-4356-bce8-14013e435c92","Type":"ContainerStarted","Data":"90a85f7e26de6dba4c9a70e33c338d3d4449b49b134bc94874fefd9d19c4cbef"} Oct 07 12:50:26 crc kubenswrapper[5024]: I1007 12:50:26.196547 5024 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a6533041-d509-4740-9ed5-06cdf97e7340","Type":"ContainerStarted","Data":"f174d38219a9c7744cf065a444928d854503916f61eb39072405465a8a6ff5b8"} Oct 07 12:50:26 crc kubenswrapper[5024]: I1007 12:50:26.197679 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:50:26 crc kubenswrapper[5024]: I1007 12:50:26.218755 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv" podStartSLOduration=2.024474555 podStartE2EDuration="11.218733034s" podCreationTimestamp="2025-10-07 12:50:15 +0000 UTC" firstStartedPulling="2025-10-07 12:50:16.735105833 +0000 UTC m=+1354.810892671" lastFinishedPulling="2025-10-07 12:50:25.929364312 +0000 UTC m=+1364.005151150" observedRunningTime="2025-10-07 12:50:26.208663459 +0000 UTC m=+1364.284450297" watchObservedRunningTime="2025-10-07 12:50:26.218733034 +0000 UTC m=+1364.294519872" Oct 07 12:50:26 crc kubenswrapper[5024]: I1007 12:50:26.235649 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=42.23563002 podStartE2EDuration="42.23563002s" podCreationTimestamp="2025-10-07 12:49:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 12:50:26.230563841 +0000 UTC m=+1364.306350689" watchObservedRunningTime="2025-10-07 12:50:26.23563002 +0000 UTC m=+1364.311416858" Oct 07 12:50:31 crc kubenswrapper[5024]: I1007 12:50:31.460470 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mgcfx"] Oct 07 12:50:31 crc kubenswrapper[5024]: I1007 12:50:31.463370 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mgcfx" Oct 07 12:50:31 crc kubenswrapper[5024]: I1007 12:50:31.489272 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mgcfx"] Oct 07 12:50:31 crc kubenswrapper[5024]: I1007 12:50:31.557011 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ecafc0d-941b-45cb-849d-ff3e35aef70f-utilities\") pod \"redhat-operators-mgcfx\" (UID: \"6ecafc0d-941b-45cb-849d-ff3e35aef70f\") " pod="openshift-marketplace/redhat-operators-mgcfx" Oct 07 12:50:31 crc kubenswrapper[5024]: I1007 12:50:31.557075 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ecafc0d-941b-45cb-849d-ff3e35aef70f-catalog-content\") pod \"redhat-operators-mgcfx\" (UID: \"6ecafc0d-941b-45cb-849d-ff3e35aef70f\") " pod="openshift-marketplace/redhat-operators-mgcfx" Oct 07 12:50:31 crc kubenswrapper[5024]: I1007 12:50:31.557239 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9lkh\" (UniqueName: \"kubernetes.io/projected/6ecafc0d-941b-45cb-849d-ff3e35aef70f-kube-api-access-v9lkh\") pod \"redhat-operators-mgcfx\" (UID: \"6ecafc0d-941b-45cb-849d-ff3e35aef70f\") " pod="openshift-marketplace/redhat-operators-mgcfx" Oct 07 12:50:31 crc kubenswrapper[5024]: I1007 12:50:31.659450 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9lkh\" (UniqueName: \"kubernetes.io/projected/6ecafc0d-941b-45cb-849d-ff3e35aef70f-kube-api-access-v9lkh\") pod \"redhat-operators-mgcfx\" (UID: \"6ecafc0d-941b-45cb-849d-ff3e35aef70f\") " pod="openshift-marketplace/redhat-operators-mgcfx" Oct 07 12:50:31 crc kubenswrapper[5024]: I1007 12:50:31.659598 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ecafc0d-941b-45cb-849d-ff3e35aef70f-utilities\") pod \"redhat-operators-mgcfx\" (UID: \"6ecafc0d-941b-45cb-849d-ff3e35aef70f\") " pod="openshift-marketplace/redhat-operators-mgcfx" Oct 07 12:50:31 crc kubenswrapper[5024]: I1007 12:50:31.659632 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ecafc0d-941b-45cb-849d-ff3e35aef70f-catalog-content\") pod \"redhat-operators-mgcfx\" (UID: \"6ecafc0d-941b-45cb-849d-ff3e35aef70f\") " pod="openshift-marketplace/redhat-operators-mgcfx" Oct 07 12:50:31 crc kubenswrapper[5024]: I1007 12:50:31.660110 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ecafc0d-941b-45cb-849d-ff3e35aef70f-utilities\") pod \"redhat-operators-mgcfx\" (UID: \"6ecafc0d-941b-45cb-849d-ff3e35aef70f\") " pod="openshift-marketplace/redhat-operators-mgcfx" Oct 07 12:50:31 crc kubenswrapper[5024]: I1007 12:50:31.660218 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ecafc0d-941b-45cb-849d-ff3e35aef70f-catalog-content\") pod \"redhat-operators-mgcfx\" (UID: \"6ecafc0d-941b-45cb-849d-ff3e35aef70f\") " pod="openshift-marketplace/redhat-operators-mgcfx" Oct 07 12:50:31 crc kubenswrapper[5024]: I1007 12:50:31.679087 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9lkh\" (UniqueName: \"kubernetes.io/projected/6ecafc0d-941b-45cb-849d-ff3e35aef70f-kube-api-access-v9lkh\") pod \"redhat-operators-mgcfx\" (UID: \"6ecafc0d-941b-45cb-849d-ff3e35aef70f\") " pod="openshift-marketplace/redhat-operators-mgcfx" Oct 07 12:50:31 crc kubenswrapper[5024]: I1007 12:50:31.784370 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mgcfx" Oct 07 12:50:32 crc kubenswrapper[5024]: I1007 12:50:32.244800 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mgcfx"] Oct 07 12:50:32 crc kubenswrapper[5024]: I1007 12:50:32.268863 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mgcfx" event={"ID":"6ecafc0d-941b-45cb-849d-ff3e35aef70f","Type":"ContainerStarted","Data":"034192792b7ca97428c6f466b5d8d22ac9a2e7ccd1104d70844ce96fb2db65e4"} Oct 07 12:50:33 crc kubenswrapper[5024]: I1007 12:50:33.279220 5024 generic.go:334] "Generic (PLEG): container finished" podID="6ecafc0d-941b-45cb-849d-ff3e35aef70f" containerID="819297e988a03c5edbe5c231ed8bcb2bd2bdecdc6baec15fa9f03b760b5593d5" exitCode=0 Oct 07 12:50:33 crc kubenswrapper[5024]: I1007 12:50:33.279272 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mgcfx" event={"ID":"6ecafc0d-941b-45cb-849d-ff3e35aef70f","Type":"ContainerDied","Data":"819297e988a03c5edbe5c231ed8bcb2bd2bdecdc6baec15fa9f03b760b5593d5"} Oct 07 12:50:34 crc kubenswrapper[5024]: I1007 12:50:34.141339 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 07 12:50:35 crc kubenswrapper[5024]: I1007 12:50:35.304982 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mgcfx" event={"ID":"6ecafc0d-941b-45cb-849d-ff3e35aef70f","Type":"ContainerStarted","Data":"6c4d6331c66ec65de987827ebd67fefe0d5f2c458eef12e46a57a90938451ad9"} Oct 07 12:50:39 crc kubenswrapper[5024]: I1007 12:50:39.342428 5024 generic.go:334] "Generic (PLEG): container finished" podID="b9b6d878-915c-4356-bce8-14013e435c92" containerID="90a85f7e26de6dba4c9a70e33c338d3d4449b49b134bc94874fefd9d19c4cbef" exitCode=0 Oct 07 12:50:39 crc kubenswrapper[5024]: I1007 12:50:39.342513 5024 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv" event={"ID":"b9b6d878-915c-4356-bce8-14013e435c92","Type":"ContainerDied","Data":"90a85f7e26de6dba4c9a70e33c338d3d4449b49b134bc94874fefd9d19c4cbef"} Oct 07 12:50:40 crc kubenswrapper[5024]: I1007 12:50:40.760262 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv" Oct 07 12:50:40 crc kubenswrapper[5024]: I1007 12:50:40.830035 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-597jh\" (UniqueName: \"kubernetes.io/projected/b9b6d878-915c-4356-bce8-14013e435c92-kube-api-access-597jh\") pod \"b9b6d878-915c-4356-bce8-14013e435c92\" (UID: \"b9b6d878-915c-4356-bce8-14013e435c92\") " Oct 07 12:50:40 crc kubenswrapper[5024]: I1007 12:50:40.830254 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9b6d878-915c-4356-bce8-14013e435c92-ssh-key\") pod \"b9b6d878-915c-4356-bce8-14013e435c92\" (UID: \"b9b6d878-915c-4356-bce8-14013e435c92\") " Oct 07 12:50:40 crc kubenswrapper[5024]: I1007 12:50:40.830282 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b6d878-915c-4356-bce8-14013e435c92-repo-setup-combined-ca-bundle\") pod \"b9b6d878-915c-4356-bce8-14013e435c92\" (UID: \"b9b6d878-915c-4356-bce8-14013e435c92\") " Oct 07 12:50:40 crc kubenswrapper[5024]: I1007 12:50:40.830336 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9b6d878-915c-4356-bce8-14013e435c92-inventory\") pod \"b9b6d878-915c-4356-bce8-14013e435c92\" (UID: \"b9b6d878-915c-4356-bce8-14013e435c92\") " Oct 07 12:50:40 crc kubenswrapper[5024]: I1007 12:50:40.836108 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/b9b6d878-915c-4356-bce8-14013e435c92-kube-api-access-597jh" (OuterVolumeSpecName: "kube-api-access-597jh") pod "b9b6d878-915c-4356-bce8-14013e435c92" (UID: "b9b6d878-915c-4356-bce8-14013e435c92"). InnerVolumeSpecName "kube-api-access-597jh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:50:40 crc kubenswrapper[5024]: I1007 12:50:40.836540 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b6d878-915c-4356-bce8-14013e435c92-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "b9b6d878-915c-4356-bce8-14013e435c92" (UID: "b9b6d878-915c-4356-bce8-14013e435c92"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:50:40 crc kubenswrapper[5024]: I1007 12:50:40.856354 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b6d878-915c-4356-bce8-14013e435c92-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b9b6d878-915c-4356-bce8-14013e435c92" (UID: "b9b6d878-915c-4356-bce8-14013e435c92"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:50:40 crc kubenswrapper[5024]: I1007 12:50:40.857381 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b6d878-915c-4356-bce8-14013e435c92-inventory" (OuterVolumeSpecName: "inventory") pod "b9b6d878-915c-4356-bce8-14013e435c92" (UID: "b9b6d878-915c-4356-bce8-14013e435c92"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:50:40 crc kubenswrapper[5024]: I1007 12:50:40.934511 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-597jh\" (UniqueName: \"kubernetes.io/projected/b9b6d878-915c-4356-bce8-14013e435c92-kube-api-access-597jh\") on node \"crc\" DevicePath \"\"" Oct 07 12:50:40 crc kubenswrapper[5024]: I1007 12:50:40.934833 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9b6d878-915c-4356-bce8-14013e435c92-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 07 12:50:40 crc kubenswrapper[5024]: I1007 12:50:40.934925 5024 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b6d878-915c-4356-bce8-14013e435c92-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:50:40 crc kubenswrapper[5024]: I1007 12:50:40.935014 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9b6d878-915c-4356-bce8-14013e435c92-inventory\") on node \"crc\" DevicePath \"\"" Oct 07 12:50:41 crc kubenswrapper[5024]: I1007 12:50:41.362481 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv" event={"ID":"b9b6d878-915c-4356-bce8-14013e435c92","Type":"ContainerDied","Data":"fd9c9701a9d5ed75d5b39a91d8b91828707f3c9ccc30e43957b2b991e3d94a13"} Oct 07 12:50:41 crc kubenswrapper[5024]: I1007 12:50:41.362524 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd9c9701a9d5ed75d5b39a91d8b91828707f3c9ccc30e43957b2b991e3d94a13" Oct 07 12:50:41 crc kubenswrapper[5024]: I1007 12:50:41.362609 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv" Oct 07 12:50:41 crc kubenswrapper[5024]: I1007 12:50:41.433349 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8"] Oct 07 12:50:41 crc kubenswrapper[5024]: E1007 12:50:41.433735 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b6d878-915c-4356-bce8-14013e435c92" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 07 12:50:41 crc kubenswrapper[5024]: I1007 12:50:41.433753 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b6d878-915c-4356-bce8-14013e435c92" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 07 12:50:41 crc kubenswrapper[5024]: I1007 12:50:41.433925 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9b6d878-915c-4356-bce8-14013e435c92" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 07 12:50:41 crc kubenswrapper[5024]: I1007 12:50:41.434528 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8" Oct 07 12:50:41 crc kubenswrapper[5024]: I1007 12:50:41.438100 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 07 12:50:41 crc kubenswrapper[5024]: I1007 12:50:41.438237 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 07 12:50:41 crc kubenswrapper[5024]: I1007 12:50:41.438416 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 07 12:50:41 crc kubenswrapper[5024]: I1007 12:50:41.440166 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-424lb" Oct 07 12:50:41 crc kubenswrapper[5024]: I1007 12:50:41.441271 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8"] Oct 07 12:50:41 crc kubenswrapper[5024]: I1007 12:50:41.551282 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f04d138-3e0f-47a6-8bb5-3488ec712d2d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8\" (UID: \"9f04d138-3e0f-47a6-8bb5-3488ec712d2d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8" Oct 07 12:50:41 crc kubenswrapper[5024]: I1007 12:50:41.551350 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f04d138-3e0f-47a6-8bb5-3488ec712d2d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8\" (UID: \"9f04d138-3e0f-47a6-8bb5-3488ec712d2d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8" Oct 07 12:50:41 crc kubenswrapper[5024]: I1007 12:50:41.551400 5024 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9f04d138-3e0f-47a6-8bb5-3488ec712d2d-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8\" (UID: \"9f04d138-3e0f-47a6-8bb5-3488ec712d2d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8" Oct 07 12:50:41 crc kubenswrapper[5024]: I1007 12:50:41.551453 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkqf9\" (UniqueName: \"kubernetes.io/projected/9f04d138-3e0f-47a6-8bb5-3488ec712d2d-kube-api-access-xkqf9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8\" (UID: \"9f04d138-3e0f-47a6-8bb5-3488ec712d2d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8" Oct 07 12:50:41 crc kubenswrapper[5024]: I1007 12:50:41.653616 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f04d138-3e0f-47a6-8bb5-3488ec712d2d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8\" (UID: \"9f04d138-3e0f-47a6-8bb5-3488ec712d2d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8" Oct 07 12:50:41 crc kubenswrapper[5024]: I1007 12:50:41.653708 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f04d138-3e0f-47a6-8bb5-3488ec712d2d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8\" (UID: \"9f04d138-3e0f-47a6-8bb5-3488ec712d2d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8" Oct 07 12:50:41 crc kubenswrapper[5024]: I1007 12:50:41.653790 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9f04d138-3e0f-47a6-8bb5-3488ec712d2d-ssh-key\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8\" (UID: \"9f04d138-3e0f-47a6-8bb5-3488ec712d2d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8" Oct 07 12:50:41 crc kubenswrapper[5024]: I1007 12:50:41.653867 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkqf9\" (UniqueName: \"kubernetes.io/projected/9f04d138-3e0f-47a6-8bb5-3488ec712d2d-kube-api-access-xkqf9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8\" (UID: \"9f04d138-3e0f-47a6-8bb5-3488ec712d2d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8" Oct 07 12:50:41 crc kubenswrapper[5024]: I1007 12:50:41.657968 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9f04d138-3e0f-47a6-8bb5-3488ec712d2d-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8\" (UID: \"9f04d138-3e0f-47a6-8bb5-3488ec712d2d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8" Oct 07 12:50:41 crc kubenswrapper[5024]: I1007 12:50:41.658833 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f04d138-3e0f-47a6-8bb5-3488ec712d2d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8\" (UID: \"9f04d138-3e0f-47a6-8bb5-3488ec712d2d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8" Oct 07 12:50:41 crc kubenswrapper[5024]: I1007 12:50:41.659799 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f04d138-3e0f-47a6-8bb5-3488ec712d2d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8\" (UID: \"9f04d138-3e0f-47a6-8bb5-3488ec712d2d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8" Oct 07 12:50:41 crc kubenswrapper[5024]: I1007 12:50:41.678498 5024 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xkqf9\" (UniqueName: \"kubernetes.io/projected/9f04d138-3e0f-47a6-8bb5-3488ec712d2d-kube-api-access-xkqf9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8\" (UID: \"9f04d138-3e0f-47a6-8bb5-3488ec712d2d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8" Oct 07 12:50:41 crc kubenswrapper[5024]: I1007 12:50:41.758584 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8" Oct 07 12:50:42 crc kubenswrapper[5024]: I1007 12:50:42.250171 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8"] Oct 07 12:50:42 crc kubenswrapper[5024]: I1007 12:50:42.371684 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8" event={"ID":"9f04d138-3e0f-47a6-8bb5-3488ec712d2d","Type":"ContainerStarted","Data":"fc34c7d215071c4717c3daab5c8e900e67c72aaf2dd1b8910b9d8b38c505e25a"} Oct 07 12:50:42 crc kubenswrapper[5024]: I1007 12:50:42.704892 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 07 12:50:43 crc kubenswrapper[5024]: I1007 12:50:43.380871 5024 generic.go:334] "Generic (PLEG): container finished" podID="6ecafc0d-941b-45cb-849d-ff3e35aef70f" containerID="6c4d6331c66ec65de987827ebd67fefe0d5f2c458eef12e46a57a90938451ad9" exitCode=0 Oct 07 12:50:43 crc kubenswrapper[5024]: I1007 12:50:43.380941 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mgcfx" event={"ID":"6ecafc0d-941b-45cb-849d-ff3e35aef70f","Type":"ContainerDied","Data":"6c4d6331c66ec65de987827ebd67fefe0d5f2c458eef12e46a57a90938451ad9"} Oct 07 12:50:43 crc kubenswrapper[5024]: I1007 12:50:43.386004 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8" event={"ID":"9f04d138-3e0f-47a6-8bb5-3488ec712d2d","Type":"ContainerStarted","Data":"7184743f32f7e166451ff044b3658ff713a1db62f33ee9ce896760c3faeb0b57"} Oct 07 12:50:43 crc kubenswrapper[5024]: I1007 12:50:43.432058 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8" podStartSLOduration=1.9878363270000001 podStartE2EDuration="2.432040598s" podCreationTimestamp="2025-10-07 12:50:41 +0000 UTC" firstStartedPulling="2025-10-07 12:50:42.257450569 +0000 UTC m=+1380.333237407" lastFinishedPulling="2025-10-07 12:50:42.70165484 +0000 UTC m=+1380.777441678" observedRunningTime="2025-10-07 12:50:43.423357984 +0000 UTC m=+1381.499144842" watchObservedRunningTime="2025-10-07 12:50:43.432040598 +0000 UTC m=+1381.507827436" Oct 07 12:50:43 crc kubenswrapper[5024]: I1007 12:50:43.720024 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 12:50:43 crc kubenswrapper[5024]: I1007 12:50:43.720629 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 12:50:44 crc kubenswrapper[5024]: I1007 12:50:44.400331 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mgcfx" event={"ID":"6ecafc0d-941b-45cb-849d-ff3e35aef70f","Type":"ContainerStarted","Data":"6fafc195ce5a6bfe7750d88656cf8dae95dc153954b7550b5396deb689491394"} Oct 07 12:50:44 crc kubenswrapper[5024]: I1007 
12:50:44.430638 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mgcfx" podStartSLOduration=2.8571438860000002 podStartE2EDuration="13.430622188s" podCreationTimestamp="2025-10-07 12:50:31 +0000 UTC" firstStartedPulling="2025-10-07 12:50:33.286358316 +0000 UTC m=+1371.362145154" lastFinishedPulling="2025-10-07 12:50:43.859836618 +0000 UTC m=+1381.935623456" observedRunningTime="2025-10-07 12:50:44.430226697 +0000 UTC m=+1382.506013545" watchObservedRunningTime="2025-10-07 12:50:44.430622188 +0000 UTC m=+1382.506409026" Oct 07 12:50:44 crc kubenswrapper[5024]: I1007 12:50:44.668330 5024 scope.go:117] "RemoveContainer" containerID="6a80969d05de5dc169894d66537946fa6ea76bfcb60e67f286b3ce277588a810" Oct 07 12:50:45 crc kubenswrapper[5024]: I1007 12:50:45.151389 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 07 12:50:51 crc kubenswrapper[5024]: I1007 12:50:51.785655 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mgcfx" Oct 07 12:50:51 crc kubenswrapper[5024]: I1007 12:50:51.786396 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mgcfx" Oct 07 12:50:52 crc kubenswrapper[5024]: I1007 12:50:52.872910 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mgcfx" podUID="6ecafc0d-941b-45cb-849d-ff3e35aef70f" containerName="registry-server" probeResult="failure" output=< Oct 07 12:50:52 crc kubenswrapper[5024]: timeout: failed to connect service ":50051" within 1s Oct 07 12:50:52 crc kubenswrapper[5024]: > Oct 07 12:51:01 crc kubenswrapper[5024]: I1007 12:51:01.858396 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mgcfx" Oct 07 12:51:01 crc kubenswrapper[5024]: I1007 12:51:01.911964 5024 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mgcfx" Oct 07 12:51:02 crc kubenswrapper[5024]: I1007 12:51:02.656540 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mgcfx"] Oct 07 12:51:03 crc kubenswrapper[5024]: I1007 12:51:03.601868 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mgcfx" podUID="6ecafc0d-941b-45cb-849d-ff3e35aef70f" containerName="registry-server" containerID="cri-o://6fafc195ce5a6bfe7750d88656cf8dae95dc153954b7550b5396deb689491394" gracePeriod=2 Oct 07 12:51:04 crc kubenswrapper[5024]: I1007 12:51:04.044625 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mgcfx" Oct 07 12:51:04 crc kubenswrapper[5024]: I1007 12:51:04.087816 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ecafc0d-941b-45cb-849d-ff3e35aef70f-utilities\") pod \"6ecafc0d-941b-45cb-849d-ff3e35aef70f\" (UID: \"6ecafc0d-941b-45cb-849d-ff3e35aef70f\") " Oct 07 12:51:04 crc kubenswrapper[5024]: I1007 12:51:04.087871 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ecafc0d-941b-45cb-849d-ff3e35aef70f-catalog-content\") pod \"6ecafc0d-941b-45cb-849d-ff3e35aef70f\" (UID: \"6ecafc0d-941b-45cb-849d-ff3e35aef70f\") " Oct 07 12:51:04 crc kubenswrapper[5024]: I1007 12:51:04.088047 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9lkh\" (UniqueName: \"kubernetes.io/projected/6ecafc0d-941b-45cb-849d-ff3e35aef70f-kube-api-access-v9lkh\") pod \"6ecafc0d-941b-45cb-849d-ff3e35aef70f\" (UID: \"6ecafc0d-941b-45cb-849d-ff3e35aef70f\") " Oct 07 12:51:04 crc kubenswrapper[5024]: I1007 12:51:04.089170 5024 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ecafc0d-941b-45cb-849d-ff3e35aef70f-utilities" (OuterVolumeSpecName: "utilities") pod "6ecafc0d-941b-45cb-849d-ff3e35aef70f" (UID: "6ecafc0d-941b-45cb-849d-ff3e35aef70f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:51:04 crc kubenswrapper[5024]: I1007 12:51:04.093689 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ecafc0d-941b-45cb-849d-ff3e35aef70f-kube-api-access-v9lkh" (OuterVolumeSpecName: "kube-api-access-v9lkh") pod "6ecafc0d-941b-45cb-849d-ff3e35aef70f" (UID: "6ecafc0d-941b-45cb-849d-ff3e35aef70f"). InnerVolumeSpecName "kube-api-access-v9lkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:51:04 crc kubenswrapper[5024]: I1007 12:51:04.182593 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ecafc0d-941b-45cb-849d-ff3e35aef70f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ecafc0d-941b-45cb-849d-ff3e35aef70f" (UID: "6ecafc0d-941b-45cb-849d-ff3e35aef70f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:51:04 crc kubenswrapper[5024]: I1007 12:51:04.189915 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ecafc0d-941b-45cb-849d-ff3e35aef70f-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 12:51:04 crc kubenswrapper[5024]: I1007 12:51:04.189948 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ecafc0d-941b-45cb-849d-ff3e35aef70f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 12:51:04 crc kubenswrapper[5024]: I1007 12:51:04.189960 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9lkh\" (UniqueName: \"kubernetes.io/projected/6ecafc0d-941b-45cb-849d-ff3e35aef70f-kube-api-access-v9lkh\") on node \"crc\" DevicePath \"\"" Oct 07 12:51:04 crc kubenswrapper[5024]: I1007 12:51:04.614116 5024 generic.go:334] "Generic (PLEG): container finished" podID="6ecafc0d-941b-45cb-849d-ff3e35aef70f" containerID="6fafc195ce5a6bfe7750d88656cf8dae95dc153954b7550b5396deb689491394" exitCode=0 Oct 07 12:51:04 crc kubenswrapper[5024]: I1007 12:51:04.614247 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mgcfx" Oct 07 12:51:04 crc kubenswrapper[5024]: I1007 12:51:04.614280 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mgcfx" event={"ID":"6ecafc0d-941b-45cb-849d-ff3e35aef70f","Type":"ContainerDied","Data":"6fafc195ce5a6bfe7750d88656cf8dae95dc153954b7550b5396deb689491394"} Oct 07 12:51:04 crc kubenswrapper[5024]: I1007 12:51:04.614745 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mgcfx" event={"ID":"6ecafc0d-941b-45cb-849d-ff3e35aef70f","Type":"ContainerDied","Data":"034192792b7ca97428c6f466b5d8d22ac9a2e7ccd1104d70844ce96fb2db65e4"} Oct 07 12:51:04 crc kubenswrapper[5024]: I1007 12:51:04.614766 5024 scope.go:117] "RemoveContainer" containerID="6fafc195ce5a6bfe7750d88656cf8dae95dc153954b7550b5396deb689491394" Oct 07 12:51:04 crc kubenswrapper[5024]: I1007 12:51:04.652672 5024 scope.go:117] "RemoveContainer" containerID="6c4d6331c66ec65de987827ebd67fefe0d5f2c458eef12e46a57a90938451ad9" Oct 07 12:51:04 crc kubenswrapper[5024]: I1007 12:51:04.666583 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mgcfx"] Oct 07 12:51:04 crc kubenswrapper[5024]: I1007 12:51:04.673363 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mgcfx"] Oct 07 12:51:04 crc kubenswrapper[5024]: I1007 12:51:04.694249 5024 scope.go:117] "RemoveContainer" containerID="819297e988a03c5edbe5c231ed8bcb2bd2bdecdc6baec15fa9f03b760b5593d5" Oct 07 12:51:04 crc kubenswrapper[5024]: I1007 12:51:04.727355 5024 scope.go:117] "RemoveContainer" containerID="6fafc195ce5a6bfe7750d88656cf8dae95dc153954b7550b5396deb689491394" Oct 07 12:51:04 crc kubenswrapper[5024]: E1007 12:51:04.727854 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6fafc195ce5a6bfe7750d88656cf8dae95dc153954b7550b5396deb689491394\": container with ID starting with 6fafc195ce5a6bfe7750d88656cf8dae95dc153954b7550b5396deb689491394 not found: ID does not exist" containerID="6fafc195ce5a6bfe7750d88656cf8dae95dc153954b7550b5396deb689491394" Oct 07 12:51:04 crc kubenswrapper[5024]: I1007 12:51:04.727900 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fafc195ce5a6bfe7750d88656cf8dae95dc153954b7550b5396deb689491394"} err="failed to get container status \"6fafc195ce5a6bfe7750d88656cf8dae95dc153954b7550b5396deb689491394\": rpc error: code = NotFound desc = could not find container \"6fafc195ce5a6bfe7750d88656cf8dae95dc153954b7550b5396deb689491394\": container with ID starting with 6fafc195ce5a6bfe7750d88656cf8dae95dc153954b7550b5396deb689491394 not found: ID does not exist" Oct 07 12:51:04 crc kubenswrapper[5024]: I1007 12:51:04.727929 5024 scope.go:117] "RemoveContainer" containerID="6c4d6331c66ec65de987827ebd67fefe0d5f2c458eef12e46a57a90938451ad9" Oct 07 12:51:04 crc kubenswrapper[5024]: E1007 12:51:04.728397 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c4d6331c66ec65de987827ebd67fefe0d5f2c458eef12e46a57a90938451ad9\": container with ID starting with 6c4d6331c66ec65de987827ebd67fefe0d5f2c458eef12e46a57a90938451ad9 not found: ID does not exist" containerID="6c4d6331c66ec65de987827ebd67fefe0d5f2c458eef12e46a57a90938451ad9" Oct 07 12:51:04 crc kubenswrapper[5024]: I1007 12:51:04.728426 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c4d6331c66ec65de987827ebd67fefe0d5f2c458eef12e46a57a90938451ad9"} err="failed to get container status \"6c4d6331c66ec65de987827ebd67fefe0d5f2c458eef12e46a57a90938451ad9\": rpc error: code = NotFound desc = could not find container \"6c4d6331c66ec65de987827ebd67fefe0d5f2c458eef12e46a57a90938451ad9\": container with ID 
starting with 6c4d6331c66ec65de987827ebd67fefe0d5f2c458eef12e46a57a90938451ad9 not found: ID does not exist" Oct 07 12:51:04 crc kubenswrapper[5024]: I1007 12:51:04.728452 5024 scope.go:117] "RemoveContainer" containerID="819297e988a03c5edbe5c231ed8bcb2bd2bdecdc6baec15fa9f03b760b5593d5" Oct 07 12:51:04 crc kubenswrapper[5024]: E1007 12:51:04.728677 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"819297e988a03c5edbe5c231ed8bcb2bd2bdecdc6baec15fa9f03b760b5593d5\": container with ID starting with 819297e988a03c5edbe5c231ed8bcb2bd2bdecdc6baec15fa9f03b760b5593d5 not found: ID does not exist" containerID="819297e988a03c5edbe5c231ed8bcb2bd2bdecdc6baec15fa9f03b760b5593d5" Oct 07 12:51:04 crc kubenswrapper[5024]: I1007 12:51:04.728704 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"819297e988a03c5edbe5c231ed8bcb2bd2bdecdc6baec15fa9f03b760b5593d5"} err="failed to get container status \"819297e988a03c5edbe5c231ed8bcb2bd2bdecdc6baec15fa9f03b760b5593d5\": rpc error: code = NotFound desc = could not find container \"819297e988a03c5edbe5c231ed8bcb2bd2bdecdc6baec15fa9f03b760b5593d5\": container with ID starting with 819297e988a03c5edbe5c231ed8bcb2bd2bdecdc6baec15fa9f03b760b5593d5 not found: ID does not exist" Oct 07 12:51:04 crc kubenswrapper[5024]: I1007 12:51:04.763904 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ecafc0d-941b-45cb-849d-ff3e35aef70f" path="/var/lib/kubelet/pods/6ecafc0d-941b-45cb-849d-ff3e35aef70f/volumes" Oct 07 12:51:13 crc kubenswrapper[5024]: I1007 12:51:13.720930 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 12:51:13 crc kubenswrapper[5024]: I1007 
12:51:13.721660 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 12:51:43 crc kubenswrapper[5024]: I1007 12:51:43.720199 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 12:51:43 crc kubenswrapper[5024]: I1007 12:51:43.720811 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 12:51:43 crc kubenswrapper[5024]: I1007 12:51:43.720861 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 12:51:43 crc kubenswrapper[5024]: I1007 12:51:43.721661 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"07ea9542ad73f2a8df6c66c3f061d1b2b70707d49cc55df226190bf69e9c5f54"} pod="openshift-machine-config-operator/machine-config-daemon-t95cr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 07 12:51:43 crc kubenswrapper[5024]: I1007 12:51:43.721747 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" 
containerName="machine-config-daemon" containerID="cri-o://07ea9542ad73f2a8df6c66c3f061d1b2b70707d49cc55df226190bf69e9c5f54" gracePeriod=600 Oct 07 12:51:44 crc kubenswrapper[5024]: I1007 12:51:44.014948 5024 generic.go:334] "Generic (PLEG): container finished" podID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerID="07ea9542ad73f2a8df6c66c3f061d1b2b70707d49cc55df226190bf69e9c5f54" exitCode=0 Oct 07 12:51:44 crc kubenswrapper[5024]: I1007 12:51:44.014989 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerDied","Data":"07ea9542ad73f2a8df6c66c3f061d1b2b70707d49cc55df226190bf69e9c5f54"} Oct 07 12:51:44 crc kubenswrapper[5024]: I1007 12:51:44.015020 5024 scope.go:117] "RemoveContainer" containerID="fd405720319248df31cb182cbf68d7e11b73aa6427c42acbbb531905f6746cbe" Oct 07 12:51:44 crc kubenswrapper[5024]: I1007 12:51:44.752955 5024 scope.go:117] "RemoveContainer" containerID="25c89f56f78b8b88cd2c3df3f916d763412e1939008df36aaafee4fc23cd252c" Oct 07 12:51:44 crc kubenswrapper[5024]: I1007 12:51:44.789043 5024 scope.go:117] "RemoveContainer" containerID="c0a0b0343fd64d8e25099cce68699244689d9756420b81adadf059d849991bce" Oct 07 12:51:44 crc kubenswrapper[5024]: I1007 12:51:44.830062 5024 scope.go:117] "RemoveContainer" containerID="19a0724a20f9cbc32e62f5106339a5004204577740d0c4d13b542b15b6459785" Oct 07 12:51:45 crc kubenswrapper[5024]: I1007 12:51:45.026176 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerStarted","Data":"3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9"} Oct 07 12:52:02 crc kubenswrapper[5024]: I1007 12:52:02.255900 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gxpvv"] Oct 07 12:52:02 crc kubenswrapper[5024]: 
E1007 12:52:02.256976 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ecafc0d-941b-45cb-849d-ff3e35aef70f" containerName="extract-utilities" Oct 07 12:52:02 crc kubenswrapper[5024]: I1007 12:52:02.256996 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ecafc0d-941b-45cb-849d-ff3e35aef70f" containerName="extract-utilities" Oct 07 12:52:02 crc kubenswrapper[5024]: E1007 12:52:02.257015 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ecafc0d-941b-45cb-849d-ff3e35aef70f" containerName="registry-server" Oct 07 12:52:02 crc kubenswrapper[5024]: I1007 12:52:02.257024 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ecafc0d-941b-45cb-849d-ff3e35aef70f" containerName="registry-server" Oct 07 12:52:02 crc kubenswrapper[5024]: E1007 12:52:02.257064 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ecafc0d-941b-45cb-849d-ff3e35aef70f" containerName="extract-content" Oct 07 12:52:02 crc kubenswrapper[5024]: I1007 12:52:02.257074 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ecafc0d-941b-45cb-849d-ff3e35aef70f" containerName="extract-content" Oct 07 12:52:02 crc kubenswrapper[5024]: I1007 12:52:02.257381 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ecafc0d-941b-45cb-849d-ff3e35aef70f" containerName="registry-server" Oct 07 12:52:02 crc kubenswrapper[5024]: I1007 12:52:02.259153 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxpvv" Oct 07 12:52:02 crc kubenswrapper[5024]: I1007 12:52:02.266516 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxpvv"] Oct 07 12:52:02 crc kubenswrapper[5024]: I1007 12:52:02.301502 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa6fe831-5d3c-46a5-8590-ce5e7caf77c7-catalog-content\") pod \"redhat-marketplace-gxpvv\" (UID: \"fa6fe831-5d3c-46a5-8590-ce5e7caf77c7\") " pod="openshift-marketplace/redhat-marketplace-gxpvv" Oct 07 12:52:02 crc kubenswrapper[5024]: I1007 12:52:02.301635 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztzrp\" (UniqueName: \"kubernetes.io/projected/fa6fe831-5d3c-46a5-8590-ce5e7caf77c7-kube-api-access-ztzrp\") pod \"redhat-marketplace-gxpvv\" (UID: \"fa6fe831-5d3c-46a5-8590-ce5e7caf77c7\") " pod="openshift-marketplace/redhat-marketplace-gxpvv" Oct 07 12:52:02 crc kubenswrapper[5024]: I1007 12:52:02.301690 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa6fe831-5d3c-46a5-8590-ce5e7caf77c7-utilities\") pod \"redhat-marketplace-gxpvv\" (UID: \"fa6fe831-5d3c-46a5-8590-ce5e7caf77c7\") " pod="openshift-marketplace/redhat-marketplace-gxpvv" Oct 07 12:52:02 crc kubenswrapper[5024]: I1007 12:52:02.403863 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztzrp\" (UniqueName: \"kubernetes.io/projected/fa6fe831-5d3c-46a5-8590-ce5e7caf77c7-kube-api-access-ztzrp\") pod \"redhat-marketplace-gxpvv\" (UID: \"fa6fe831-5d3c-46a5-8590-ce5e7caf77c7\") " pod="openshift-marketplace/redhat-marketplace-gxpvv" Oct 07 12:52:02 crc kubenswrapper[5024]: I1007 12:52:02.403934 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa6fe831-5d3c-46a5-8590-ce5e7caf77c7-utilities\") pod \"redhat-marketplace-gxpvv\" (UID: \"fa6fe831-5d3c-46a5-8590-ce5e7caf77c7\") " pod="openshift-marketplace/redhat-marketplace-gxpvv" Oct 07 12:52:02 crc kubenswrapper[5024]: I1007 12:52:02.404065 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa6fe831-5d3c-46a5-8590-ce5e7caf77c7-catalog-content\") pod \"redhat-marketplace-gxpvv\" (UID: \"fa6fe831-5d3c-46a5-8590-ce5e7caf77c7\") " pod="openshift-marketplace/redhat-marketplace-gxpvv" Oct 07 12:52:02 crc kubenswrapper[5024]: I1007 12:52:02.404723 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa6fe831-5d3c-46a5-8590-ce5e7caf77c7-catalog-content\") pod \"redhat-marketplace-gxpvv\" (UID: \"fa6fe831-5d3c-46a5-8590-ce5e7caf77c7\") " pod="openshift-marketplace/redhat-marketplace-gxpvv" Oct 07 12:52:02 crc kubenswrapper[5024]: I1007 12:52:02.404782 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa6fe831-5d3c-46a5-8590-ce5e7caf77c7-utilities\") pod \"redhat-marketplace-gxpvv\" (UID: \"fa6fe831-5d3c-46a5-8590-ce5e7caf77c7\") " pod="openshift-marketplace/redhat-marketplace-gxpvv" Oct 07 12:52:02 crc kubenswrapper[5024]: I1007 12:52:02.425989 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztzrp\" (UniqueName: \"kubernetes.io/projected/fa6fe831-5d3c-46a5-8590-ce5e7caf77c7-kube-api-access-ztzrp\") pod \"redhat-marketplace-gxpvv\" (UID: \"fa6fe831-5d3c-46a5-8590-ce5e7caf77c7\") " pod="openshift-marketplace/redhat-marketplace-gxpvv" Oct 07 12:52:02 crc kubenswrapper[5024]: I1007 12:52:02.606775 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxpvv" Oct 07 12:52:03 crc kubenswrapper[5024]: I1007 12:52:03.031607 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxpvv"] Oct 07 12:52:03 crc kubenswrapper[5024]: I1007 12:52:03.172458 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxpvv" event={"ID":"fa6fe831-5d3c-46a5-8590-ce5e7caf77c7","Type":"ContainerStarted","Data":"16f532e5e436f1ab23a24f2a840f0c9550f346826ccdd21020667811d0ee97dd"} Oct 07 12:52:04 crc kubenswrapper[5024]: I1007 12:52:04.182460 5024 generic.go:334] "Generic (PLEG): container finished" podID="fa6fe831-5d3c-46a5-8590-ce5e7caf77c7" containerID="468a7a37d3b4ed1303b91587e4f2c019fa6f1d9227965ce1542fe7edca34f636" exitCode=0 Oct 07 12:52:04 crc kubenswrapper[5024]: I1007 12:52:04.182543 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxpvv" event={"ID":"fa6fe831-5d3c-46a5-8590-ce5e7caf77c7","Type":"ContainerDied","Data":"468a7a37d3b4ed1303b91587e4f2c019fa6f1d9227965ce1542fe7edca34f636"} Oct 07 12:52:05 crc kubenswrapper[5024]: I1007 12:52:05.194770 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxpvv" event={"ID":"fa6fe831-5d3c-46a5-8590-ce5e7caf77c7","Type":"ContainerStarted","Data":"15e405fd55974ba41f292a1cf830b483915089edeecd7f2d88985850f03cdd92"} Oct 07 12:52:06 crc kubenswrapper[5024]: I1007 12:52:06.206739 5024 generic.go:334] "Generic (PLEG): container finished" podID="fa6fe831-5d3c-46a5-8590-ce5e7caf77c7" containerID="15e405fd55974ba41f292a1cf830b483915089edeecd7f2d88985850f03cdd92" exitCode=0 Oct 07 12:52:06 crc kubenswrapper[5024]: I1007 12:52:06.206793 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxpvv" 
event={"ID":"fa6fe831-5d3c-46a5-8590-ce5e7caf77c7","Type":"ContainerDied","Data":"15e405fd55974ba41f292a1cf830b483915089edeecd7f2d88985850f03cdd92"} Oct 07 12:52:07 crc kubenswrapper[5024]: I1007 12:52:07.218404 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxpvv" event={"ID":"fa6fe831-5d3c-46a5-8590-ce5e7caf77c7","Type":"ContainerStarted","Data":"d265f9c13ac89d9e3b8522ff036f1d4dc4c0877f4252664241cd4a963f8e8a81"} Oct 07 12:52:07 crc kubenswrapper[5024]: I1007 12:52:07.243484 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gxpvv" podStartSLOduration=2.695348631 podStartE2EDuration="5.24346525s" podCreationTimestamp="2025-10-07 12:52:02 +0000 UTC" firstStartedPulling="2025-10-07 12:52:04.184185119 +0000 UTC m=+1462.259971957" lastFinishedPulling="2025-10-07 12:52:06.732301728 +0000 UTC m=+1464.808088576" observedRunningTime="2025-10-07 12:52:07.237510806 +0000 UTC m=+1465.313297644" watchObservedRunningTime="2025-10-07 12:52:07.24346525 +0000 UTC m=+1465.319252089" Oct 07 12:52:09 crc kubenswrapper[5024]: I1007 12:52:09.634359 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t6dzs"] Oct 07 12:52:09 crc kubenswrapper[5024]: I1007 12:52:09.636534 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t6dzs" Oct 07 12:52:09 crc kubenswrapper[5024]: I1007 12:52:09.644750 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t6dzs"] Oct 07 12:52:09 crc kubenswrapper[5024]: I1007 12:52:09.838120 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tf8d\" (UniqueName: \"kubernetes.io/projected/1f7fd3a4-b41e-45bf-975a-6f5950abd601-kube-api-access-4tf8d\") pod \"certified-operators-t6dzs\" (UID: \"1f7fd3a4-b41e-45bf-975a-6f5950abd601\") " pod="openshift-marketplace/certified-operators-t6dzs" Oct 07 12:52:09 crc kubenswrapper[5024]: I1007 12:52:09.838203 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f7fd3a4-b41e-45bf-975a-6f5950abd601-utilities\") pod \"certified-operators-t6dzs\" (UID: \"1f7fd3a4-b41e-45bf-975a-6f5950abd601\") " pod="openshift-marketplace/certified-operators-t6dzs" Oct 07 12:52:09 crc kubenswrapper[5024]: I1007 12:52:09.838303 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f7fd3a4-b41e-45bf-975a-6f5950abd601-catalog-content\") pod \"certified-operators-t6dzs\" (UID: \"1f7fd3a4-b41e-45bf-975a-6f5950abd601\") " pod="openshift-marketplace/certified-operators-t6dzs" Oct 07 12:52:09 crc kubenswrapper[5024]: I1007 12:52:09.939667 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tf8d\" (UniqueName: \"kubernetes.io/projected/1f7fd3a4-b41e-45bf-975a-6f5950abd601-kube-api-access-4tf8d\") pod \"certified-operators-t6dzs\" (UID: \"1f7fd3a4-b41e-45bf-975a-6f5950abd601\") " pod="openshift-marketplace/certified-operators-t6dzs" Oct 07 12:52:09 crc kubenswrapper[5024]: I1007 12:52:09.939737 5024 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f7fd3a4-b41e-45bf-975a-6f5950abd601-utilities\") pod \"certified-operators-t6dzs\" (UID: \"1f7fd3a4-b41e-45bf-975a-6f5950abd601\") " pod="openshift-marketplace/certified-operators-t6dzs" Oct 07 12:52:09 crc kubenswrapper[5024]: I1007 12:52:09.939827 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f7fd3a4-b41e-45bf-975a-6f5950abd601-catalog-content\") pod \"certified-operators-t6dzs\" (UID: \"1f7fd3a4-b41e-45bf-975a-6f5950abd601\") " pod="openshift-marketplace/certified-operators-t6dzs" Oct 07 12:52:09 crc kubenswrapper[5024]: I1007 12:52:09.940415 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f7fd3a4-b41e-45bf-975a-6f5950abd601-catalog-content\") pod \"certified-operators-t6dzs\" (UID: \"1f7fd3a4-b41e-45bf-975a-6f5950abd601\") " pod="openshift-marketplace/certified-operators-t6dzs" Oct 07 12:52:09 crc kubenswrapper[5024]: I1007 12:52:09.940535 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f7fd3a4-b41e-45bf-975a-6f5950abd601-utilities\") pod \"certified-operators-t6dzs\" (UID: \"1f7fd3a4-b41e-45bf-975a-6f5950abd601\") " pod="openshift-marketplace/certified-operators-t6dzs" Oct 07 12:52:09 crc kubenswrapper[5024]: I1007 12:52:09.975528 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tf8d\" (UniqueName: \"kubernetes.io/projected/1f7fd3a4-b41e-45bf-975a-6f5950abd601-kube-api-access-4tf8d\") pod \"certified-operators-t6dzs\" (UID: \"1f7fd3a4-b41e-45bf-975a-6f5950abd601\") " pod="openshift-marketplace/certified-operators-t6dzs" Oct 07 12:52:10 crc kubenswrapper[5024]: I1007 12:52:10.270293 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t6dzs" Oct 07 12:52:10 crc kubenswrapper[5024]: I1007 12:52:10.706467 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t6dzs"] Oct 07 12:52:10 crc kubenswrapper[5024]: W1007 12:52:10.711309 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f7fd3a4_b41e_45bf_975a_6f5950abd601.slice/crio-b611cea57e0668ea1ddc947e93c2cb8ef0af2e0950d713199009812fc34832a6 WatchSource:0}: Error finding container b611cea57e0668ea1ddc947e93c2cb8ef0af2e0950d713199009812fc34832a6: Status 404 returned error can't find the container with id b611cea57e0668ea1ddc947e93c2cb8ef0af2e0950d713199009812fc34832a6 Oct 07 12:52:11 crc kubenswrapper[5024]: I1007 12:52:11.249997 5024 generic.go:334] "Generic (PLEG): container finished" podID="1f7fd3a4-b41e-45bf-975a-6f5950abd601" containerID="3218fc29978aa5e7e0d846c9ab30d9e8b286627d19ae9dc20c64367a57410c1b" exitCode=0 Oct 07 12:52:11 crc kubenswrapper[5024]: I1007 12:52:11.250038 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6dzs" event={"ID":"1f7fd3a4-b41e-45bf-975a-6f5950abd601","Type":"ContainerDied","Data":"3218fc29978aa5e7e0d846c9ab30d9e8b286627d19ae9dc20c64367a57410c1b"} Oct 07 12:52:11 crc kubenswrapper[5024]: I1007 12:52:11.250063 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6dzs" event={"ID":"1f7fd3a4-b41e-45bf-975a-6f5950abd601","Type":"ContainerStarted","Data":"b611cea57e0668ea1ddc947e93c2cb8ef0af2e0950d713199009812fc34832a6"} Oct 07 12:52:12 crc kubenswrapper[5024]: I1007 12:52:12.607919 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gxpvv" Oct 07 12:52:12 crc kubenswrapper[5024]: I1007 12:52:12.608268 5024 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gxpvv" Oct 07 12:52:12 crc kubenswrapper[5024]: I1007 12:52:12.650833 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gxpvv" Oct 07 12:52:13 crc kubenswrapper[5024]: I1007 12:52:13.276461 5024 generic.go:334] "Generic (PLEG): container finished" podID="1f7fd3a4-b41e-45bf-975a-6f5950abd601" containerID="8040b3d8557d43d31f3e18f776f68bcb6b950c9a1a9532fd454efc30262ce5b2" exitCode=0 Oct 07 12:52:13 crc kubenswrapper[5024]: I1007 12:52:13.276533 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6dzs" event={"ID":"1f7fd3a4-b41e-45bf-975a-6f5950abd601","Type":"ContainerDied","Data":"8040b3d8557d43d31f3e18f776f68bcb6b950c9a1a9532fd454efc30262ce5b2"} Oct 07 12:52:13 crc kubenswrapper[5024]: I1007 12:52:13.323123 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gxpvv" Oct 07 12:52:14 crc kubenswrapper[5024]: I1007 12:52:14.221646 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxpvv"] Oct 07 12:52:15 crc kubenswrapper[5024]: I1007 12:52:15.296876 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gxpvv" podUID="fa6fe831-5d3c-46a5-8590-ce5e7caf77c7" containerName="registry-server" containerID="cri-o://d265f9c13ac89d9e3b8522ff036f1d4dc4c0877f4252664241cd4a963f8e8a81" gracePeriod=2 Oct 07 12:52:15 crc kubenswrapper[5024]: I1007 12:52:15.297768 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6dzs" event={"ID":"1f7fd3a4-b41e-45bf-975a-6f5950abd601","Type":"ContainerStarted","Data":"d30dd1bbc2f6d7468d28d2138c5d4b3c77130b850d3205e97676c270acc9ea59"} Oct 07 12:52:15 crc kubenswrapper[5024]: I1007 12:52:15.323606 5024 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t6dzs" podStartSLOduration=2.658078898 podStartE2EDuration="6.323584679s" podCreationTimestamp="2025-10-07 12:52:09 +0000 UTC" firstStartedPulling="2025-10-07 12:52:11.25179084 +0000 UTC m=+1469.327577668" lastFinishedPulling="2025-10-07 12:52:14.917296611 +0000 UTC m=+1472.993083449" observedRunningTime="2025-10-07 12:52:15.319632343 +0000 UTC m=+1473.395419171" watchObservedRunningTime="2025-10-07 12:52:15.323584679 +0000 UTC m=+1473.399371517" Oct 07 12:52:15 crc kubenswrapper[5024]: I1007 12:52:15.742556 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxpvv" Oct 07 12:52:15 crc kubenswrapper[5024]: I1007 12:52:15.849429 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa6fe831-5d3c-46a5-8590-ce5e7caf77c7-utilities\") pod \"fa6fe831-5d3c-46a5-8590-ce5e7caf77c7\" (UID: \"fa6fe831-5d3c-46a5-8590-ce5e7caf77c7\") " Oct 07 12:52:15 crc kubenswrapper[5024]: I1007 12:52:15.849976 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztzrp\" (UniqueName: \"kubernetes.io/projected/fa6fe831-5d3c-46a5-8590-ce5e7caf77c7-kube-api-access-ztzrp\") pod \"fa6fe831-5d3c-46a5-8590-ce5e7caf77c7\" (UID: \"fa6fe831-5d3c-46a5-8590-ce5e7caf77c7\") " Oct 07 12:52:15 crc kubenswrapper[5024]: I1007 12:52:15.850157 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa6fe831-5d3c-46a5-8590-ce5e7caf77c7-catalog-content\") pod \"fa6fe831-5d3c-46a5-8590-ce5e7caf77c7\" (UID: \"fa6fe831-5d3c-46a5-8590-ce5e7caf77c7\") " Oct 07 12:52:15 crc kubenswrapper[5024]: I1007 12:52:15.850342 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/fa6fe831-5d3c-46a5-8590-ce5e7caf77c7-utilities" (OuterVolumeSpecName: "utilities") pod "fa6fe831-5d3c-46a5-8590-ce5e7caf77c7" (UID: "fa6fe831-5d3c-46a5-8590-ce5e7caf77c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:52:15 crc kubenswrapper[5024]: I1007 12:52:15.850719 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa6fe831-5d3c-46a5-8590-ce5e7caf77c7-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 12:52:15 crc kubenswrapper[5024]: I1007 12:52:15.859213 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa6fe831-5d3c-46a5-8590-ce5e7caf77c7-kube-api-access-ztzrp" (OuterVolumeSpecName: "kube-api-access-ztzrp") pod "fa6fe831-5d3c-46a5-8590-ce5e7caf77c7" (UID: "fa6fe831-5d3c-46a5-8590-ce5e7caf77c7"). InnerVolumeSpecName "kube-api-access-ztzrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:52:15 crc kubenswrapper[5024]: I1007 12:52:15.863965 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa6fe831-5d3c-46a5-8590-ce5e7caf77c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa6fe831-5d3c-46a5-8590-ce5e7caf77c7" (UID: "fa6fe831-5d3c-46a5-8590-ce5e7caf77c7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:52:15 crc kubenswrapper[5024]: I1007 12:52:15.952121 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa6fe831-5d3c-46a5-8590-ce5e7caf77c7-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 12:52:15 crc kubenswrapper[5024]: I1007 12:52:15.952235 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztzrp\" (UniqueName: \"kubernetes.io/projected/fa6fe831-5d3c-46a5-8590-ce5e7caf77c7-kube-api-access-ztzrp\") on node \"crc\" DevicePath \"\"" Oct 07 12:52:16 crc kubenswrapper[5024]: I1007 12:52:16.310192 5024 generic.go:334] "Generic (PLEG): container finished" podID="fa6fe831-5d3c-46a5-8590-ce5e7caf77c7" containerID="d265f9c13ac89d9e3b8522ff036f1d4dc4c0877f4252664241cd4a963f8e8a81" exitCode=0 Oct 07 12:52:16 crc kubenswrapper[5024]: I1007 12:52:16.310234 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxpvv" Oct 07 12:52:16 crc kubenswrapper[5024]: I1007 12:52:16.310320 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxpvv" event={"ID":"fa6fe831-5d3c-46a5-8590-ce5e7caf77c7","Type":"ContainerDied","Data":"d265f9c13ac89d9e3b8522ff036f1d4dc4c0877f4252664241cd4a963f8e8a81"} Oct 07 12:52:16 crc kubenswrapper[5024]: I1007 12:52:16.310383 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxpvv" event={"ID":"fa6fe831-5d3c-46a5-8590-ce5e7caf77c7","Type":"ContainerDied","Data":"16f532e5e436f1ab23a24f2a840f0c9550f346826ccdd21020667811d0ee97dd"} Oct 07 12:52:16 crc kubenswrapper[5024]: I1007 12:52:16.310403 5024 scope.go:117] "RemoveContainer" containerID="d265f9c13ac89d9e3b8522ff036f1d4dc4c0877f4252664241cd4a963f8e8a81" Oct 07 12:52:16 crc kubenswrapper[5024]: I1007 12:52:16.348529 5024 scope.go:117] "RemoveContainer" 
containerID="15e405fd55974ba41f292a1cf830b483915089edeecd7f2d88985850f03cdd92" Oct 07 12:52:16 crc kubenswrapper[5024]: I1007 12:52:16.349646 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxpvv"] Oct 07 12:52:16 crc kubenswrapper[5024]: I1007 12:52:16.369263 5024 scope.go:117] "RemoveContainer" containerID="468a7a37d3b4ed1303b91587e4f2c019fa6f1d9227965ce1542fe7edca34f636" Oct 07 12:52:16 crc kubenswrapper[5024]: I1007 12:52:16.376862 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxpvv"] Oct 07 12:52:16 crc kubenswrapper[5024]: I1007 12:52:16.407528 5024 scope.go:117] "RemoveContainer" containerID="d265f9c13ac89d9e3b8522ff036f1d4dc4c0877f4252664241cd4a963f8e8a81" Oct 07 12:52:16 crc kubenswrapper[5024]: E1007 12:52:16.407955 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d265f9c13ac89d9e3b8522ff036f1d4dc4c0877f4252664241cd4a963f8e8a81\": container with ID starting with d265f9c13ac89d9e3b8522ff036f1d4dc4c0877f4252664241cd4a963f8e8a81 not found: ID does not exist" containerID="d265f9c13ac89d9e3b8522ff036f1d4dc4c0877f4252664241cd4a963f8e8a81" Oct 07 12:52:16 crc kubenswrapper[5024]: I1007 12:52:16.407986 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d265f9c13ac89d9e3b8522ff036f1d4dc4c0877f4252664241cd4a963f8e8a81"} err="failed to get container status \"d265f9c13ac89d9e3b8522ff036f1d4dc4c0877f4252664241cd4a963f8e8a81\": rpc error: code = NotFound desc = could not find container \"d265f9c13ac89d9e3b8522ff036f1d4dc4c0877f4252664241cd4a963f8e8a81\": container with ID starting with d265f9c13ac89d9e3b8522ff036f1d4dc4c0877f4252664241cd4a963f8e8a81 not found: ID does not exist" Oct 07 12:52:16 crc kubenswrapper[5024]: I1007 12:52:16.408005 5024 scope.go:117] "RemoveContainer" 
containerID="15e405fd55974ba41f292a1cf830b483915089edeecd7f2d88985850f03cdd92" Oct 07 12:52:16 crc kubenswrapper[5024]: E1007 12:52:16.408239 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15e405fd55974ba41f292a1cf830b483915089edeecd7f2d88985850f03cdd92\": container with ID starting with 15e405fd55974ba41f292a1cf830b483915089edeecd7f2d88985850f03cdd92 not found: ID does not exist" containerID="15e405fd55974ba41f292a1cf830b483915089edeecd7f2d88985850f03cdd92" Oct 07 12:52:16 crc kubenswrapper[5024]: I1007 12:52:16.408261 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15e405fd55974ba41f292a1cf830b483915089edeecd7f2d88985850f03cdd92"} err="failed to get container status \"15e405fd55974ba41f292a1cf830b483915089edeecd7f2d88985850f03cdd92\": rpc error: code = NotFound desc = could not find container \"15e405fd55974ba41f292a1cf830b483915089edeecd7f2d88985850f03cdd92\": container with ID starting with 15e405fd55974ba41f292a1cf830b483915089edeecd7f2d88985850f03cdd92 not found: ID does not exist" Oct 07 12:52:16 crc kubenswrapper[5024]: I1007 12:52:16.408273 5024 scope.go:117] "RemoveContainer" containerID="468a7a37d3b4ed1303b91587e4f2c019fa6f1d9227965ce1542fe7edca34f636" Oct 07 12:52:16 crc kubenswrapper[5024]: E1007 12:52:16.408523 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"468a7a37d3b4ed1303b91587e4f2c019fa6f1d9227965ce1542fe7edca34f636\": container with ID starting with 468a7a37d3b4ed1303b91587e4f2c019fa6f1d9227965ce1542fe7edca34f636 not found: ID does not exist" containerID="468a7a37d3b4ed1303b91587e4f2c019fa6f1d9227965ce1542fe7edca34f636" Oct 07 12:52:16 crc kubenswrapper[5024]: I1007 12:52:16.408546 5024 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"468a7a37d3b4ed1303b91587e4f2c019fa6f1d9227965ce1542fe7edca34f636"} err="failed to get container status \"468a7a37d3b4ed1303b91587e4f2c019fa6f1d9227965ce1542fe7edca34f636\": rpc error: code = NotFound desc = could not find container \"468a7a37d3b4ed1303b91587e4f2c019fa6f1d9227965ce1542fe7edca34f636\": container with ID starting with 468a7a37d3b4ed1303b91587e4f2c019fa6f1d9227965ce1542fe7edca34f636 not found: ID does not exist" Oct 07 12:52:16 crc kubenswrapper[5024]: I1007 12:52:16.764916 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa6fe831-5d3c-46a5-8590-ce5e7caf77c7" path="/var/lib/kubelet/pods/fa6fe831-5d3c-46a5-8590-ce5e7caf77c7/volumes" Oct 07 12:52:20 crc kubenswrapper[5024]: I1007 12:52:20.270674 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t6dzs" Oct 07 12:52:20 crc kubenswrapper[5024]: I1007 12:52:20.271406 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t6dzs" Oct 07 12:52:20 crc kubenswrapper[5024]: I1007 12:52:20.316829 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t6dzs" Oct 07 12:52:20 crc kubenswrapper[5024]: I1007 12:52:20.400874 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t6dzs" Oct 07 12:52:21 crc kubenswrapper[5024]: I1007 12:52:21.021877 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t6dzs"] Oct 07 12:52:22 crc kubenswrapper[5024]: I1007 12:52:22.372411 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t6dzs" podUID="1f7fd3a4-b41e-45bf-975a-6f5950abd601" containerName="registry-server" 
containerID="cri-o://d30dd1bbc2f6d7468d28d2138c5d4b3c77130b850d3205e97676c270acc9ea59" gracePeriod=2 Oct 07 12:52:22 crc kubenswrapper[5024]: I1007 12:52:22.803242 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t6dzs" Oct 07 12:52:22 crc kubenswrapper[5024]: I1007 12:52:22.982045 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f7fd3a4-b41e-45bf-975a-6f5950abd601-utilities\") pod \"1f7fd3a4-b41e-45bf-975a-6f5950abd601\" (UID: \"1f7fd3a4-b41e-45bf-975a-6f5950abd601\") " Oct 07 12:52:22 crc kubenswrapper[5024]: I1007 12:52:22.982227 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tf8d\" (UniqueName: \"kubernetes.io/projected/1f7fd3a4-b41e-45bf-975a-6f5950abd601-kube-api-access-4tf8d\") pod \"1f7fd3a4-b41e-45bf-975a-6f5950abd601\" (UID: \"1f7fd3a4-b41e-45bf-975a-6f5950abd601\") " Oct 07 12:52:22 crc kubenswrapper[5024]: I1007 12:52:22.982266 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f7fd3a4-b41e-45bf-975a-6f5950abd601-catalog-content\") pod \"1f7fd3a4-b41e-45bf-975a-6f5950abd601\" (UID: \"1f7fd3a4-b41e-45bf-975a-6f5950abd601\") " Oct 07 12:52:22 crc kubenswrapper[5024]: I1007 12:52:22.984299 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f7fd3a4-b41e-45bf-975a-6f5950abd601-utilities" (OuterVolumeSpecName: "utilities") pod "1f7fd3a4-b41e-45bf-975a-6f5950abd601" (UID: "1f7fd3a4-b41e-45bf-975a-6f5950abd601"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:52:22 crc kubenswrapper[5024]: I1007 12:52:22.995287 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f7fd3a4-b41e-45bf-975a-6f5950abd601-kube-api-access-4tf8d" (OuterVolumeSpecName: "kube-api-access-4tf8d") pod "1f7fd3a4-b41e-45bf-975a-6f5950abd601" (UID: "1f7fd3a4-b41e-45bf-975a-6f5950abd601"). InnerVolumeSpecName "kube-api-access-4tf8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:52:23 crc kubenswrapper[5024]: I1007 12:52:23.086325 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f7fd3a4-b41e-45bf-975a-6f5950abd601-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 12:52:23 crc kubenswrapper[5024]: I1007 12:52:23.086393 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tf8d\" (UniqueName: \"kubernetes.io/projected/1f7fd3a4-b41e-45bf-975a-6f5950abd601-kube-api-access-4tf8d\") on node \"crc\" DevicePath \"\"" Oct 07 12:52:23 crc kubenswrapper[5024]: I1007 12:52:23.385276 5024 generic.go:334] "Generic (PLEG): container finished" podID="1f7fd3a4-b41e-45bf-975a-6f5950abd601" containerID="d30dd1bbc2f6d7468d28d2138c5d4b3c77130b850d3205e97676c270acc9ea59" exitCode=0 Oct 07 12:52:23 crc kubenswrapper[5024]: I1007 12:52:23.385326 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6dzs" event={"ID":"1f7fd3a4-b41e-45bf-975a-6f5950abd601","Type":"ContainerDied","Data":"d30dd1bbc2f6d7468d28d2138c5d4b3c77130b850d3205e97676c270acc9ea59"} Oct 07 12:52:23 crc kubenswrapper[5024]: I1007 12:52:23.385356 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6dzs" event={"ID":"1f7fd3a4-b41e-45bf-975a-6f5950abd601","Type":"ContainerDied","Data":"b611cea57e0668ea1ddc947e93c2cb8ef0af2e0950d713199009812fc34832a6"} Oct 07 12:52:23 crc kubenswrapper[5024]: 
I1007 12:52:23.385377 5024 scope.go:117] "RemoveContainer" containerID="d30dd1bbc2f6d7468d28d2138c5d4b3c77130b850d3205e97676c270acc9ea59" Oct 07 12:52:23 crc kubenswrapper[5024]: I1007 12:52:23.385541 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t6dzs" Oct 07 12:52:23 crc kubenswrapper[5024]: I1007 12:52:23.411830 5024 scope.go:117] "RemoveContainer" containerID="8040b3d8557d43d31f3e18f776f68bcb6b950c9a1a9532fd454efc30262ce5b2" Oct 07 12:52:23 crc kubenswrapper[5024]: I1007 12:52:23.455956 5024 scope.go:117] "RemoveContainer" containerID="3218fc29978aa5e7e0d846c9ab30d9e8b286627d19ae9dc20c64367a57410c1b" Oct 07 12:52:23 crc kubenswrapper[5024]: I1007 12:52:23.494029 5024 scope.go:117] "RemoveContainer" containerID="d30dd1bbc2f6d7468d28d2138c5d4b3c77130b850d3205e97676c270acc9ea59" Oct 07 12:52:23 crc kubenswrapper[5024]: E1007 12:52:23.494515 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d30dd1bbc2f6d7468d28d2138c5d4b3c77130b850d3205e97676c270acc9ea59\": container with ID starting with d30dd1bbc2f6d7468d28d2138c5d4b3c77130b850d3205e97676c270acc9ea59 not found: ID does not exist" containerID="d30dd1bbc2f6d7468d28d2138c5d4b3c77130b850d3205e97676c270acc9ea59" Oct 07 12:52:23 crc kubenswrapper[5024]: I1007 12:52:23.494564 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d30dd1bbc2f6d7468d28d2138c5d4b3c77130b850d3205e97676c270acc9ea59"} err="failed to get container status \"d30dd1bbc2f6d7468d28d2138c5d4b3c77130b850d3205e97676c270acc9ea59\": rpc error: code = NotFound desc = could not find container \"d30dd1bbc2f6d7468d28d2138c5d4b3c77130b850d3205e97676c270acc9ea59\": container with ID starting with d30dd1bbc2f6d7468d28d2138c5d4b3c77130b850d3205e97676c270acc9ea59 not found: ID does not exist" Oct 07 12:52:23 crc kubenswrapper[5024]: I1007 12:52:23.494594 5024 
scope.go:117] "RemoveContainer" containerID="8040b3d8557d43d31f3e18f776f68bcb6b950c9a1a9532fd454efc30262ce5b2" Oct 07 12:52:23 crc kubenswrapper[5024]: E1007 12:52:23.495385 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8040b3d8557d43d31f3e18f776f68bcb6b950c9a1a9532fd454efc30262ce5b2\": container with ID starting with 8040b3d8557d43d31f3e18f776f68bcb6b950c9a1a9532fd454efc30262ce5b2 not found: ID does not exist" containerID="8040b3d8557d43d31f3e18f776f68bcb6b950c9a1a9532fd454efc30262ce5b2" Oct 07 12:52:23 crc kubenswrapper[5024]: I1007 12:52:23.495561 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8040b3d8557d43d31f3e18f776f68bcb6b950c9a1a9532fd454efc30262ce5b2"} err="failed to get container status \"8040b3d8557d43d31f3e18f776f68bcb6b950c9a1a9532fd454efc30262ce5b2\": rpc error: code = NotFound desc = could not find container \"8040b3d8557d43d31f3e18f776f68bcb6b950c9a1a9532fd454efc30262ce5b2\": container with ID starting with 8040b3d8557d43d31f3e18f776f68bcb6b950c9a1a9532fd454efc30262ce5b2 not found: ID does not exist" Oct 07 12:52:23 crc kubenswrapper[5024]: I1007 12:52:23.495720 5024 scope.go:117] "RemoveContainer" containerID="3218fc29978aa5e7e0d846c9ab30d9e8b286627d19ae9dc20c64367a57410c1b" Oct 07 12:52:23 crc kubenswrapper[5024]: E1007 12:52:23.496311 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3218fc29978aa5e7e0d846c9ab30d9e8b286627d19ae9dc20c64367a57410c1b\": container with ID starting with 3218fc29978aa5e7e0d846c9ab30d9e8b286627d19ae9dc20c64367a57410c1b not found: ID does not exist" containerID="3218fc29978aa5e7e0d846c9ab30d9e8b286627d19ae9dc20c64367a57410c1b" Oct 07 12:52:23 crc kubenswrapper[5024]: I1007 12:52:23.496333 5024 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3218fc29978aa5e7e0d846c9ab30d9e8b286627d19ae9dc20c64367a57410c1b"} err="failed to get container status \"3218fc29978aa5e7e0d846c9ab30d9e8b286627d19ae9dc20c64367a57410c1b\": rpc error: code = NotFound desc = could not find container \"3218fc29978aa5e7e0d846c9ab30d9e8b286627d19ae9dc20c64367a57410c1b\": container with ID starting with 3218fc29978aa5e7e0d846c9ab30d9e8b286627d19ae9dc20c64367a57410c1b not found: ID does not exist" Oct 07 12:52:24 crc kubenswrapper[5024]: I1007 12:52:24.022148 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f7fd3a4-b41e-45bf-975a-6f5950abd601-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f7fd3a4-b41e-45bf-975a-6f5950abd601" (UID: "1f7fd3a4-b41e-45bf-975a-6f5950abd601"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:52:24 crc kubenswrapper[5024]: I1007 12:52:24.114929 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f7fd3a4-b41e-45bf-975a-6f5950abd601-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 12:52:24 crc kubenswrapper[5024]: I1007 12:52:24.327338 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t6dzs"] Oct 07 12:52:24 crc kubenswrapper[5024]: I1007 12:52:24.336822 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t6dzs"] Oct 07 12:52:24 crc kubenswrapper[5024]: I1007 12:52:24.762244 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f7fd3a4-b41e-45bf-975a-6f5950abd601" path="/var/lib/kubelet/pods/1f7fd3a4-b41e-45bf-975a-6f5950abd601/volumes" Oct 07 12:52:44 crc kubenswrapper[5024]: I1007 12:52:44.929957 5024 scope.go:117] "RemoveContainer" containerID="6902d138bb396eac85623e13d1fbf4cbf89103d17aa20f39618feaee6ce7711f" Oct 07 12:52:44 crc kubenswrapper[5024]: I1007 
12:52:44.969998 5024 scope.go:117] "RemoveContainer" containerID="a1ef91e314a58792ea367d5d9e6b03190fb8f98df4e87c9055051b0e41a4e102" Oct 07 12:52:45 crc kubenswrapper[5024]: I1007 12:52:45.007677 5024 scope.go:117] "RemoveContainer" containerID="643bb3bb8da669b60ac194133a40cfe326531afd37a14512629f1d6815e678c1" Oct 07 12:53:55 crc kubenswrapper[5024]: I1007 12:53:55.240223 5024 generic.go:334] "Generic (PLEG): container finished" podID="9f04d138-3e0f-47a6-8bb5-3488ec712d2d" containerID="7184743f32f7e166451ff044b3658ff713a1db62f33ee9ce896760c3faeb0b57" exitCode=0 Oct 07 12:53:55 crc kubenswrapper[5024]: I1007 12:53:55.240285 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8" event={"ID":"9f04d138-3e0f-47a6-8bb5-3488ec712d2d","Type":"ContainerDied","Data":"7184743f32f7e166451ff044b3658ff713a1db62f33ee9ce896760c3faeb0b57"} Oct 07 12:53:56 crc kubenswrapper[5024]: I1007 12:53:56.594531 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8" Oct 07 12:53:56 crc kubenswrapper[5024]: I1007 12:53:56.694970 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9f04d138-3e0f-47a6-8bb5-3488ec712d2d-ssh-key\") pod \"9f04d138-3e0f-47a6-8bb5-3488ec712d2d\" (UID: \"9f04d138-3e0f-47a6-8bb5-3488ec712d2d\") " Oct 07 12:53:56 crc kubenswrapper[5024]: I1007 12:53:56.695272 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f04d138-3e0f-47a6-8bb5-3488ec712d2d-bootstrap-combined-ca-bundle\") pod \"9f04d138-3e0f-47a6-8bb5-3488ec712d2d\" (UID: \"9f04d138-3e0f-47a6-8bb5-3488ec712d2d\") " Oct 07 12:53:56 crc kubenswrapper[5024]: I1007 12:53:56.695345 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkqf9\" (UniqueName: \"kubernetes.io/projected/9f04d138-3e0f-47a6-8bb5-3488ec712d2d-kube-api-access-xkqf9\") pod \"9f04d138-3e0f-47a6-8bb5-3488ec712d2d\" (UID: \"9f04d138-3e0f-47a6-8bb5-3488ec712d2d\") " Oct 07 12:53:56 crc kubenswrapper[5024]: I1007 12:53:56.700791 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f04d138-3e0f-47a6-8bb5-3488ec712d2d-kube-api-access-xkqf9" (OuterVolumeSpecName: "kube-api-access-xkqf9") pod "9f04d138-3e0f-47a6-8bb5-3488ec712d2d" (UID: "9f04d138-3e0f-47a6-8bb5-3488ec712d2d"). InnerVolumeSpecName "kube-api-access-xkqf9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:53:56 crc kubenswrapper[5024]: I1007 12:53:56.700821 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f04d138-3e0f-47a6-8bb5-3488ec712d2d-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "9f04d138-3e0f-47a6-8bb5-3488ec712d2d" (UID: "9f04d138-3e0f-47a6-8bb5-3488ec712d2d"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:53:56 crc kubenswrapper[5024]: I1007 12:53:56.719810 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f04d138-3e0f-47a6-8bb5-3488ec712d2d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9f04d138-3e0f-47a6-8bb5-3488ec712d2d" (UID: "9f04d138-3e0f-47a6-8bb5-3488ec712d2d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:53:56 crc kubenswrapper[5024]: I1007 12:53:56.796404 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f04d138-3e0f-47a6-8bb5-3488ec712d2d-inventory\") pod \"9f04d138-3e0f-47a6-8bb5-3488ec712d2d\" (UID: \"9f04d138-3e0f-47a6-8bb5-3488ec712d2d\") " Oct 07 12:53:56 crc kubenswrapper[5024]: I1007 12:53:56.797127 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkqf9\" (UniqueName: \"kubernetes.io/projected/9f04d138-3e0f-47a6-8bb5-3488ec712d2d-kube-api-access-xkqf9\") on node \"crc\" DevicePath \"\"" Oct 07 12:53:56 crc kubenswrapper[5024]: I1007 12:53:56.797240 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9f04d138-3e0f-47a6-8bb5-3488ec712d2d-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 07 12:53:56 crc kubenswrapper[5024]: I1007 12:53:56.797312 5024 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9f04d138-3e0f-47a6-8bb5-3488ec712d2d-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 12:53:56 crc kubenswrapper[5024]: I1007 12:53:56.819450 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f04d138-3e0f-47a6-8bb5-3488ec712d2d-inventory" (OuterVolumeSpecName: "inventory") pod "9f04d138-3e0f-47a6-8bb5-3488ec712d2d" (UID: "9f04d138-3e0f-47a6-8bb5-3488ec712d2d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:53:56 crc kubenswrapper[5024]: I1007 12:53:56.900711 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f04d138-3e0f-47a6-8bb5-3488ec712d2d-inventory\") on node \"crc\" DevicePath \"\"" Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.262370 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8" event={"ID":"9f04d138-3e0f-47a6-8bb5-3488ec712d2d","Type":"ContainerDied","Data":"fc34c7d215071c4717c3daab5c8e900e67c72aaf2dd1b8910b9d8b38c505e25a"} Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.262416 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc34c7d215071c4717c3daab5c8e900e67c72aaf2dd1b8910b9d8b38c505e25a" Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.262502 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8" Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.342825 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2f67n"] Oct 07 12:53:57 crc kubenswrapper[5024]: E1007 12:53:57.343524 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa6fe831-5d3c-46a5-8590-ce5e7caf77c7" containerName="registry-server" Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.343556 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa6fe831-5d3c-46a5-8590-ce5e7caf77c7" containerName="registry-server" Oct 07 12:53:57 crc kubenswrapper[5024]: E1007 12:53:57.343592 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa6fe831-5d3c-46a5-8590-ce5e7caf77c7" containerName="extract-utilities" Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.343606 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa6fe831-5d3c-46a5-8590-ce5e7caf77c7" containerName="extract-utilities" Oct 07 12:53:57 crc kubenswrapper[5024]: E1007 12:53:57.343630 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f7fd3a4-b41e-45bf-975a-6f5950abd601" containerName="extract-utilities" Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.343645 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7fd3a4-b41e-45bf-975a-6f5950abd601" containerName="extract-utilities" Oct 07 12:53:57 crc kubenswrapper[5024]: E1007 12:53:57.343685 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f7fd3a4-b41e-45bf-975a-6f5950abd601" containerName="registry-server" Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.343698 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7fd3a4-b41e-45bf-975a-6f5950abd601" containerName="registry-server" Oct 07 12:53:57 crc kubenswrapper[5024]: E1007 12:53:57.343719 5024 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="9f04d138-3e0f-47a6-8bb5-3488ec712d2d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.343736 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f04d138-3e0f-47a6-8bb5-3488ec712d2d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 07 12:53:57 crc kubenswrapper[5024]: E1007 12:53:57.343753 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa6fe831-5d3c-46a5-8590-ce5e7caf77c7" containerName="extract-content" Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.343765 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa6fe831-5d3c-46a5-8590-ce5e7caf77c7" containerName="extract-content" Oct 07 12:53:57 crc kubenswrapper[5024]: E1007 12:53:57.343795 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f7fd3a4-b41e-45bf-975a-6f5950abd601" containerName="extract-content" Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.343808 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7fd3a4-b41e-45bf-975a-6f5950abd601" containerName="extract-content" Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.344185 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f04d138-3e0f-47a6-8bb5-3488ec712d2d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.344222 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f7fd3a4-b41e-45bf-975a-6f5950abd601" containerName="registry-server" Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.344240 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa6fe831-5d3c-46a5-8590-ce5e7caf77c7" containerName="registry-server" Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.345477 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2f67n" Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.347568 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.347706 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.348233 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.349834 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-424lb" Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.354971 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2f67n"] Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.510991 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj9ff\" (UniqueName: \"kubernetes.io/projected/66e17556-df1d-48d3-b70a-9fe70ca23500-kube-api-access-lj9ff\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2f67n\" (UID: \"66e17556-df1d-48d3-b70a-9fe70ca23500\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2f67n" Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.511073 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/66e17556-df1d-48d3-b70a-9fe70ca23500-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2f67n\" (UID: \"66e17556-df1d-48d3-b70a-9fe70ca23500\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2f67n" Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 
12:53:57.511324 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66e17556-df1d-48d3-b70a-9fe70ca23500-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2f67n\" (UID: \"66e17556-df1d-48d3-b70a-9fe70ca23500\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2f67n" Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.613262 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj9ff\" (UniqueName: \"kubernetes.io/projected/66e17556-df1d-48d3-b70a-9fe70ca23500-kube-api-access-lj9ff\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2f67n\" (UID: \"66e17556-df1d-48d3-b70a-9fe70ca23500\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2f67n" Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.613316 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/66e17556-df1d-48d3-b70a-9fe70ca23500-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2f67n\" (UID: \"66e17556-df1d-48d3-b70a-9fe70ca23500\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2f67n" Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.613352 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66e17556-df1d-48d3-b70a-9fe70ca23500-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2f67n\" (UID: \"66e17556-df1d-48d3-b70a-9fe70ca23500\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2f67n" Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.618340 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66e17556-df1d-48d3-b70a-9fe70ca23500-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-2f67n\" (UID: \"66e17556-df1d-48d3-b70a-9fe70ca23500\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2f67n" Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.618975 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/66e17556-df1d-48d3-b70a-9fe70ca23500-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2f67n\" (UID: \"66e17556-df1d-48d3-b70a-9fe70ca23500\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2f67n" Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.637193 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj9ff\" (UniqueName: \"kubernetes.io/projected/66e17556-df1d-48d3-b70a-9fe70ca23500-kube-api-access-lj9ff\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2f67n\" (UID: \"66e17556-df1d-48d3-b70a-9fe70ca23500\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2f67n" Oct 07 12:53:57 crc kubenswrapper[5024]: I1007 12:53:57.669182 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2f67n" Oct 07 12:53:58 crc kubenswrapper[5024]: I1007 12:53:58.236270 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2f67n"] Oct 07 12:53:58 crc kubenswrapper[5024]: I1007 12:53:58.271613 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2f67n" event={"ID":"66e17556-df1d-48d3-b70a-9fe70ca23500","Type":"ContainerStarted","Data":"c4f22fdb459a23ab2d4c358b3f4ebdcd7cf0a4c78ca5559cb6ee9a8fe417d6e4"} Oct 07 12:53:59 crc kubenswrapper[5024]: I1007 12:53:59.281620 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2f67n" event={"ID":"66e17556-df1d-48d3-b70a-9fe70ca23500","Type":"ContainerStarted","Data":"00c612a88448688b35e79421d7447906b42bab903114840f6ba7d2ad9d0fe06d"} Oct 07 12:53:59 crc kubenswrapper[5024]: I1007 12:53:59.309956 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2f67n" podStartSLOduration=1.677413113 podStartE2EDuration="2.309933772s" podCreationTimestamp="2025-10-07 12:53:57 +0000 UTC" firstStartedPulling="2025-10-07 12:53:58.239287111 +0000 UTC m=+1576.315073949" lastFinishedPulling="2025-10-07 12:53:58.87180777 +0000 UTC m=+1576.947594608" observedRunningTime="2025-10-07 12:53:59.29860527 +0000 UTC m=+1577.374392148" watchObservedRunningTime="2025-10-07 12:53:59.309933772 +0000 UTC m=+1577.385720620" Oct 07 12:54:13 crc kubenswrapper[5024]: I1007 12:54:13.720587 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Oct 07 12:54:13 crc kubenswrapper[5024]: I1007 12:54:13.721246 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 12:54:43 crc kubenswrapper[5024]: I1007 12:54:43.720221 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 12:54:43 crc kubenswrapper[5024]: I1007 12:54:43.720804 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 12:55:13 crc kubenswrapper[5024]: I1007 12:55:13.063370 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-dgrgv"] Oct 07 12:55:13 crc kubenswrapper[5024]: I1007 12:55:13.082742 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-s9nrp"] Oct 07 12:55:13 crc kubenswrapper[5024]: I1007 12:55:13.090518 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-6m24h"] Oct 07 12:55:13 crc kubenswrapper[5024]: I1007 12:55:13.098814 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-s9nrp"] Oct 07 12:55:13 crc kubenswrapper[5024]: I1007 12:55:13.105721 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-6m24h"] Oct 07 12:55:13 crc kubenswrapper[5024]: I1007 
12:55:13.112352 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-dgrgv"] Oct 07 12:55:13 crc kubenswrapper[5024]: I1007 12:55:13.720229 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 12:55:13 crc kubenswrapper[5024]: I1007 12:55:13.720289 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 12:55:13 crc kubenswrapper[5024]: I1007 12:55:13.720337 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 12:55:13 crc kubenswrapper[5024]: I1007 12:55:13.720859 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9"} pod="openshift-machine-config-operator/machine-config-daemon-t95cr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 07 12:55:13 crc kubenswrapper[5024]: I1007 12:55:13.720920 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" containerID="cri-o://3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9" gracePeriod=600 Oct 07 12:55:13 crc kubenswrapper[5024]: E1007 12:55:13.841103 5024 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 12:55:13 crc kubenswrapper[5024]: I1007 12:55:13.944203 5024 generic.go:334] "Generic (PLEG): container finished" podID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerID="3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9" exitCode=0 Oct 07 12:55:13 crc kubenswrapper[5024]: I1007 12:55:13.944413 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerDied","Data":"3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9"} Oct 07 12:55:13 crc kubenswrapper[5024]: I1007 12:55:13.944668 5024 scope.go:117] "RemoveContainer" containerID="07ea9542ad73f2a8df6c66c3f061d1b2b70707d49cc55df226190bf69e9c5f54" Oct 07 12:55:13 crc kubenswrapper[5024]: I1007 12:55:13.945773 5024 scope.go:117] "RemoveContainer" containerID="3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9" Oct 07 12:55:13 crc kubenswrapper[5024]: E1007 12:55:13.946372 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 12:55:13 crc kubenswrapper[5024]: I1007 12:55:13.947027 5024 generic.go:334] "Generic (PLEG): container finished" podID="66e17556-df1d-48d3-b70a-9fe70ca23500" 
containerID="00c612a88448688b35e79421d7447906b42bab903114840f6ba7d2ad9d0fe06d" exitCode=0 Oct 07 12:55:13 crc kubenswrapper[5024]: I1007 12:55:13.947073 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2f67n" event={"ID":"66e17556-df1d-48d3-b70a-9fe70ca23500","Type":"ContainerDied","Data":"00c612a88448688b35e79421d7447906b42bab903114840f6ba7d2ad9d0fe06d"} Oct 07 12:55:14 crc kubenswrapper[5024]: I1007 12:55:14.760638 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="134d1f22-ab85-4918-9d60-3c39f1d2f66e" path="/var/lib/kubelet/pods/134d1f22-ab85-4918-9d60-3c39f1d2f66e/volumes" Oct 07 12:55:14 crc kubenswrapper[5024]: I1007 12:55:14.761871 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ab3a6dc-e122-4ea7-8a9e-b6e208d5a66d" path="/var/lib/kubelet/pods/9ab3a6dc-e122-4ea7-8a9e-b6e208d5a66d/volumes" Oct 07 12:55:14 crc kubenswrapper[5024]: I1007 12:55:14.762410 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec5d8fec-7318-4048-82bd-fef760cc6a57" path="/var/lib/kubelet/pods/ec5d8fec-7318-4048-82bd-fef760cc6a57/volumes" Oct 07 12:55:15 crc kubenswrapper[5024]: I1007 12:55:15.375174 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2f67n" Oct 07 12:55:15 crc kubenswrapper[5024]: I1007 12:55:15.464517 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj9ff\" (UniqueName: \"kubernetes.io/projected/66e17556-df1d-48d3-b70a-9fe70ca23500-kube-api-access-lj9ff\") pod \"66e17556-df1d-48d3-b70a-9fe70ca23500\" (UID: \"66e17556-df1d-48d3-b70a-9fe70ca23500\") " Oct 07 12:55:15 crc kubenswrapper[5024]: I1007 12:55:15.464579 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66e17556-df1d-48d3-b70a-9fe70ca23500-inventory\") pod \"66e17556-df1d-48d3-b70a-9fe70ca23500\" (UID: \"66e17556-df1d-48d3-b70a-9fe70ca23500\") " Oct 07 12:55:15 crc kubenswrapper[5024]: I1007 12:55:15.464612 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/66e17556-df1d-48d3-b70a-9fe70ca23500-ssh-key\") pod \"66e17556-df1d-48d3-b70a-9fe70ca23500\" (UID: \"66e17556-df1d-48d3-b70a-9fe70ca23500\") " Oct 07 12:55:15 crc kubenswrapper[5024]: I1007 12:55:15.470654 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66e17556-df1d-48d3-b70a-9fe70ca23500-kube-api-access-lj9ff" (OuterVolumeSpecName: "kube-api-access-lj9ff") pod "66e17556-df1d-48d3-b70a-9fe70ca23500" (UID: "66e17556-df1d-48d3-b70a-9fe70ca23500"). InnerVolumeSpecName "kube-api-access-lj9ff". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:55:15 crc kubenswrapper[5024]: I1007 12:55:15.492428 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66e17556-df1d-48d3-b70a-9fe70ca23500-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "66e17556-df1d-48d3-b70a-9fe70ca23500" (UID: "66e17556-df1d-48d3-b70a-9fe70ca23500"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:55:15 crc kubenswrapper[5024]: I1007 12:55:15.492668 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66e17556-df1d-48d3-b70a-9fe70ca23500-inventory" (OuterVolumeSpecName: "inventory") pod "66e17556-df1d-48d3-b70a-9fe70ca23500" (UID: "66e17556-df1d-48d3-b70a-9fe70ca23500"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:55:15 crc kubenswrapper[5024]: I1007 12:55:15.568183 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj9ff\" (UniqueName: \"kubernetes.io/projected/66e17556-df1d-48d3-b70a-9fe70ca23500-kube-api-access-lj9ff\") on node \"crc\" DevicePath \"\"" Oct 07 12:55:15 crc kubenswrapper[5024]: I1007 12:55:15.568204 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66e17556-df1d-48d3-b70a-9fe70ca23500-inventory\") on node \"crc\" DevicePath \"\"" Oct 07 12:55:15 crc kubenswrapper[5024]: I1007 12:55:15.568213 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/66e17556-df1d-48d3-b70a-9fe70ca23500-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 07 12:55:15 crc kubenswrapper[5024]: I1007 12:55:15.969929 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2f67n" event={"ID":"66e17556-df1d-48d3-b70a-9fe70ca23500","Type":"ContainerDied","Data":"c4f22fdb459a23ab2d4c358b3f4ebdcd7cf0a4c78ca5559cb6ee9a8fe417d6e4"} Oct 07 12:55:15 crc kubenswrapper[5024]: I1007 12:55:15.969990 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4f22fdb459a23ab2d4c358b3f4ebdcd7cf0a4c78ca5559cb6ee9a8fe417d6e4" Oct 07 12:55:15 crc kubenswrapper[5024]: I1007 12:55:15.969993 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2f67n" Oct 07 12:55:16 crc kubenswrapper[5024]: I1007 12:55:16.109960 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc"] Oct 07 12:55:16 crc kubenswrapper[5024]: E1007 12:55:16.111037 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66e17556-df1d-48d3-b70a-9fe70ca23500" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 07 12:55:16 crc kubenswrapper[5024]: I1007 12:55:16.111065 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="66e17556-df1d-48d3-b70a-9fe70ca23500" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 07 12:55:16 crc kubenswrapper[5024]: I1007 12:55:16.111338 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="66e17556-df1d-48d3-b70a-9fe70ca23500" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 07 12:55:16 crc kubenswrapper[5024]: I1007 12:55:16.111994 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc" Oct 07 12:55:16 crc kubenswrapper[5024]: I1007 12:55:16.114268 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-424lb" Oct 07 12:55:16 crc kubenswrapper[5024]: I1007 12:55:16.114517 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 07 12:55:16 crc kubenswrapper[5024]: I1007 12:55:16.114773 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 07 12:55:16 crc kubenswrapper[5024]: I1007 12:55:16.115708 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 07 12:55:16 crc kubenswrapper[5024]: I1007 12:55:16.155923 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc"] Oct 07 12:55:16 crc kubenswrapper[5024]: I1007 12:55:16.178871 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce6b8f22-a957-4a3c-b31c-6d9433ce6c80-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc\" (UID: \"ce6b8f22-a957-4a3c-b31c-6d9433ce6c80\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc" Oct 07 12:55:16 crc kubenswrapper[5024]: I1007 12:55:16.178939 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce6b8f22-a957-4a3c-b31c-6d9433ce6c80-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc\" (UID: \"ce6b8f22-a957-4a3c-b31c-6d9433ce6c80\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc" Oct 07 12:55:16 crc kubenswrapper[5024]: I1007 12:55:16.179310 5024 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbx59\" (UniqueName: \"kubernetes.io/projected/ce6b8f22-a957-4a3c-b31c-6d9433ce6c80-kube-api-access-gbx59\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc\" (UID: \"ce6b8f22-a957-4a3c-b31c-6d9433ce6c80\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc" Oct 07 12:55:16 crc kubenswrapper[5024]: I1007 12:55:16.281371 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce6b8f22-a957-4a3c-b31c-6d9433ce6c80-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc\" (UID: \"ce6b8f22-a957-4a3c-b31c-6d9433ce6c80\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc" Oct 07 12:55:16 crc kubenswrapper[5024]: I1007 12:55:16.281444 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce6b8f22-a957-4a3c-b31c-6d9433ce6c80-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc\" (UID: \"ce6b8f22-a957-4a3c-b31c-6d9433ce6c80\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc" Oct 07 12:55:16 crc kubenswrapper[5024]: I1007 12:55:16.281519 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbx59\" (UniqueName: \"kubernetes.io/projected/ce6b8f22-a957-4a3c-b31c-6d9433ce6c80-kube-api-access-gbx59\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc\" (UID: \"ce6b8f22-a957-4a3c-b31c-6d9433ce6c80\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc" Oct 07 12:55:16 crc kubenswrapper[5024]: I1007 12:55:16.287092 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce6b8f22-a957-4a3c-b31c-6d9433ce6c80-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc\" (UID: \"ce6b8f22-a957-4a3c-b31c-6d9433ce6c80\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc" Oct 07 12:55:16 crc kubenswrapper[5024]: I1007 12:55:16.291627 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce6b8f22-a957-4a3c-b31c-6d9433ce6c80-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc\" (UID: \"ce6b8f22-a957-4a3c-b31c-6d9433ce6c80\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc" Oct 07 12:55:16 crc kubenswrapper[5024]: I1007 12:55:16.299971 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbx59\" (UniqueName: \"kubernetes.io/projected/ce6b8f22-a957-4a3c-b31c-6d9433ce6c80-kube-api-access-gbx59\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc\" (UID: \"ce6b8f22-a957-4a3c-b31c-6d9433ce6c80\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc" Oct 07 12:55:16 crc kubenswrapper[5024]: I1007 12:55:16.444821 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc" Oct 07 12:55:16 crc kubenswrapper[5024]: I1007 12:55:16.977548 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc"] Oct 07 12:55:16 crc kubenswrapper[5024]: I1007 12:55:16.982531 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 07 12:55:17 crc kubenswrapper[5024]: I1007 12:55:17.990887 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc" event={"ID":"ce6b8f22-a957-4a3c-b31c-6d9433ce6c80","Type":"ContainerStarted","Data":"5660f0861b1102a8db6957f35e9fda0da1e9827c1c4a1436efe162f88b8f50cd"} Oct 07 12:55:17 crc kubenswrapper[5024]: I1007 12:55:17.991352 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc" event={"ID":"ce6b8f22-a957-4a3c-b31c-6d9433ce6c80","Type":"ContainerStarted","Data":"566b063db2d52eadf80c92b389e698e924f6c05193716a70ec5723913bc84508"} Oct 07 12:55:23 crc kubenswrapper[5024]: I1007 12:55:23.031804 5024 generic.go:334] "Generic (PLEG): container finished" podID="ce6b8f22-a957-4a3c-b31c-6d9433ce6c80" containerID="5660f0861b1102a8db6957f35e9fda0da1e9827c1c4a1436efe162f88b8f50cd" exitCode=0 Oct 07 12:55:23 crc kubenswrapper[5024]: I1007 12:55:23.031892 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc" event={"ID":"ce6b8f22-a957-4a3c-b31c-6d9433ce6c80","Type":"ContainerDied","Data":"5660f0861b1102a8db6957f35e9fda0da1e9827c1c4a1436efe162f88b8f50cd"} Oct 07 12:55:24 crc kubenswrapper[5024]: I1007 12:55:24.480799 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc" Oct 07 12:55:24 crc kubenswrapper[5024]: I1007 12:55:24.544528 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce6b8f22-a957-4a3c-b31c-6d9433ce6c80-inventory\") pod \"ce6b8f22-a957-4a3c-b31c-6d9433ce6c80\" (UID: \"ce6b8f22-a957-4a3c-b31c-6d9433ce6c80\") " Oct 07 12:55:24 crc kubenswrapper[5024]: I1007 12:55:24.544593 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbx59\" (UniqueName: \"kubernetes.io/projected/ce6b8f22-a957-4a3c-b31c-6d9433ce6c80-kube-api-access-gbx59\") pod \"ce6b8f22-a957-4a3c-b31c-6d9433ce6c80\" (UID: \"ce6b8f22-a957-4a3c-b31c-6d9433ce6c80\") " Oct 07 12:55:24 crc kubenswrapper[5024]: I1007 12:55:24.544690 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce6b8f22-a957-4a3c-b31c-6d9433ce6c80-ssh-key\") pod \"ce6b8f22-a957-4a3c-b31c-6d9433ce6c80\" (UID: \"ce6b8f22-a957-4a3c-b31c-6d9433ce6c80\") " Oct 07 12:55:24 crc kubenswrapper[5024]: I1007 12:55:24.550378 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce6b8f22-a957-4a3c-b31c-6d9433ce6c80-kube-api-access-gbx59" (OuterVolumeSpecName: "kube-api-access-gbx59") pod "ce6b8f22-a957-4a3c-b31c-6d9433ce6c80" (UID: "ce6b8f22-a957-4a3c-b31c-6d9433ce6c80"). InnerVolumeSpecName "kube-api-access-gbx59". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:55:24 crc kubenswrapper[5024]: I1007 12:55:24.572634 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce6b8f22-a957-4a3c-b31c-6d9433ce6c80-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ce6b8f22-a957-4a3c-b31c-6d9433ce6c80" (UID: "ce6b8f22-a957-4a3c-b31c-6d9433ce6c80"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:55:24 crc kubenswrapper[5024]: I1007 12:55:24.574590 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce6b8f22-a957-4a3c-b31c-6d9433ce6c80-inventory" (OuterVolumeSpecName: "inventory") pod "ce6b8f22-a957-4a3c-b31c-6d9433ce6c80" (UID: "ce6b8f22-a957-4a3c-b31c-6d9433ce6c80"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:55:24 crc kubenswrapper[5024]: I1007 12:55:24.646418 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbx59\" (UniqueName: \"kubernetes.io/projected/ce6b8f22-a957-4a3c-b31c-6d9433ce6c80-kube-api-access-gbx59\") on node \"crc\" DevicePath \"\"" Oct 07 12:55:24 crc kubenswrapper[5024]: I1007 12:55:24.646636 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce6b8f22-a957-4a3c-b31c-6d9433ce6c80-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 07 12:55:24 crc kubenswrapper[5024]: I1007 12:55:24.646648 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce6b8f22-a957-4a3c-b31c-6d9433ce6c80-inventory\") on node \"crc\" DevicePath \"\"" Oct 07 12:55:25 crc kubenswrapper[5024]: I1007 12:55:25.061930 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc" event={"ID":"ce6b8f22-a957-4a3c-b31c-6d9433ce6c80","Type":"ContainerDied","Data":"566b063db2d52eadf80c92b389e698e924f6c05193716a70ec5723913bc84508"} Oct 07 12:55:25 crc kubenswrapper[5024]: I1007 12:55:25.062009 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="566b063db2d52eadf80c92b389e698e924f6c05193716a70ec5723913bc84508" Oct 07 12:55:25 crc kubenswrapper[5024]: I1007 12:55:25.062041 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc" Oct 07 12:55:25 crc kubenswrapper[5024]: I1007 12:55:25.122048 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqvpm"] Oct 07 12:55:25 crc kubenswrapper[5024]: E1007 12:55:25.122628 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce6b8f22-a957-4a3c-b31c-6d9433ce6c80" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 07 12:55:25 crc kubenswrapper[5024]: I1007 12:55:25.122651 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce6b8f22-a957-4a3c-b31c-6d9433ce6c80" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 07 12:55:25 crc kubenswrapper[5024]: I1007 12:55:25.122885 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce6b8f22-a957-4a3c-b31c-6d9433ce6c80" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 07 12:55:25 crc kubenswrapper[5024]: I1007 12:55:25.123813 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqvpm" Oct 07 12:55:25 crc kubenswrapper[5024]: I1007 12:55:25.128872 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 07 12:55:25 crc kubenswrapper[5024]: I1007 12:55:25.128932 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 07 12:55:25 crc kubenswrapper[5024]: I1007 12:55:25.129064 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-424lb" Oct 07 12:55:25 crc kubenswrapper[5024]: I1007 12:55:25.129535 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 07 12:55:25 crc kubenswrapper[5024]: I1007 12:55:25.132507 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqvpm"] Oct 07 12:55:25 crc kubenswrapper[5024]: I1007 12:55:25.156373 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff4lk\" (UniqueName: \"kubernetes.io/projected/e8e73851-73a1-4e34-936f-dc608e4f490b-kube-api-access-ff4lk\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cqvpm\" (UID: \"e8e73851-73a1-4e34-936f-dc608e4f490b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqvpm" Oct 07 12:55:25 crc kubenswrapper[5024]: I1007 12:55:25.156458 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e8e73851-73a1-4e34-936f-dc608e4f490b-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cqvpm\" (UID: \"e8e73851-73a1-4e34-936f-dc608e4f490b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqvpm" Oct 07 12:55:25 crc kubenswrapper[5024]: I1007 12:55:25.156555 5024 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8e73851-73a1-4e34-936f-dc608e4f490b-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cqvpm\" (UID: \"e8e73851-73a1-4e34-936f-dc608e4f490b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqvpm" Oct 07 12:55:25 crc kubenswrapper[5024]: I1007 12:55:25.257612 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff4lk\" (UniqueName: \"kubernetes.io/projected/e8e73851-73a1-4e34-936f-dc608e4f490b-kube-api-access-ff4lk\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cqvpm\" (UID: \"e8e73851-73a1-4e34-936f-dc608e4f490b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqvpm" Oct 07 12:55:25 crc kubenswrapper[5024]: I1007 12:55:25.257766 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e8e73851-73a1-4e34-936f-dc608e4f490b-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cqvpm\" (UID: \"e8e73851-73a1-4e34-936f-dc608e4f490b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqvpm" Oct 07 12:55:25 crc kubenswrapper[5024]: I1007 12:55:25.257978 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8e73851-73a1-4e34-936f-dc608e4f490b-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cqvpm\" (UID: \"e8e73851-73a1-4e34-936f-dc608e4f490b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqvpm" Oct 07 12:55:25 crc kubenswrapper[5024]: I1007 12:55:25.262613 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8e73851-73a1-4e34-936f-dc608e4f490b-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cqvpm\" (UID: 
\"e8e73851-73a1-4e34-936f-dc608e4f490b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqvpm" Oct 07 12:55:25 crc kubenswrapper[5024]: I1007 12:55:25.263062 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e8e73851-73a1-4e34-936f-dc608e4f490b-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cqvpm\" (UID: \"e8e73851-73a1-4e34-936f-dc608e4f490b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqvpm" Oct 07 12:55:25 crc kubenswrapper[5024]: I1007 12:55:25.283573 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff4lk\" (UniqueName: \"kubernetes.io/projected/e8e73851-73a1-4e34-936f-dc608e4f490b-kube-api-access-ff4lk\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cqvpm\" (UID: \"e8e73851-73a1-4e34-936f-dc608e4f490b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqvpm" Oct 07 12:55:25 crc kubenswrapper[5024]: I1007 12:55:25.443404 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqvpm" Oct 07 12:55:25 crc kubenswrapper[5024]: I1007 12:55:25.945918 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqvpm"] Oct 07 12:55:26 crc kubenswrapper[5024]: I1007 12:55:26.071548 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqvpm" event={"ID":"e8e73851-73a1-4e34-936f-dc608e4f490b","Type":"ContainerStarted","Data":"f1ab993d76deb53e9fd9c937bd0b26da3aca4aaa5310c7747fe348daf0a0cb49"} Oct 07 12:55:26 crc kubenswrapper[5024]: I1007 12:55:26.752410 5024 scope.go:117] "RemoveContainer" containerID="3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9" Oct 07 12:55:26 crc kubenswrapper[5024]: E1007 12:55:26.753072 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 12:55:27 crc kubenswrapper[5024]: I1007 12:55:27.081784 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqvpm" event={"ID":"e8e73851-73a1-4e34-936f-dc608e4f490b","Type":"ContainerStarted","Data":"d719ec6b642856c166681ace7bd66a4db4b7d023db1ebc6a8cd081870d920f0f"} Oct 07 12:55:27 crc kubenswrapper[5024]: I1007 12:55:27.108570 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqvpm" podStartSLOduration=1.691468037 podStartE2EDuration="2.10854921s" podCreationTimestamp="2025-10-07 12:55:25 +0000 UTC" firstStartedPulling="2025-10-07 
12:55:25.952127586 +0000 UTC m=+1664.027914424" lastFinishedPulling="2025-10-07 12:55:26.369208759 +0000 UTC m=+1664.444995597" observedRunningTime="2025-10-07 12:55:27.094618022 +0000 UTC m=+1665.170404880" watchObservedRunningTime="2025-10-07 12:55:27.10854921 +0000 UTC m=+1665.184336048" Oct 07 12:55:28 crc kubenswrapper[5024]: I1007 12:55:28.028995 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-ed35-account-create-d6lw4"] Oct 07 12:55:28 crc kubenswrapper[5024]: I1007 12:55:28.036768 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-ed35-account-create-d6lw4"] Oct 07 12:55:28 crc kubenswrapper[5024]: I1007 12:55:28.761878 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eab15ce-c0a2-44d0-822e-f3b1ca4de908" path="/var/lib/kubelet/pods/2eab15ce-c0a2-44d0-822e-f3b1ca4de908/volumes" Oct 07 12:55:29 crc kubenswrapper[5024]: I1007 12:55:29.024595 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-5f9a-account-create-wsc9c"] Oct 07 12:55:29 crc kubenswrapper[5024]: I1007 12:55:29.032340 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-2510-account-create-z66gk"] Oct 07 12:55:29 crc kubenswrapper[5024]: I1007 12:55:29.040510 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-5f9a-account-create-wsc9c"] Oct 07 12:55:29 crc kubenswrapper[5024]: I1007 12:55:29.047864 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-2510-account-create-z66gk"] Oct 07 12:55:30 crc kubenswrapper[5024]: I1007 12:55:30.764239 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="475c1ed6-4adb-4aa3-bb17-f4a41d8a7470" path="/var/lib/kubelet/pods/475c1ed6-4adb-4aa3-bb17-f4a41d8a7470/volumes" Oct 07 12:55:30 crc kubenswrapper[5024]: I1007 12:55:30.766713 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afdbfdf7-cc14-4ff3-bd1a-6475b8f4ca61" 
path="/var/lib/kubelet/pods/afdbfdf7-cc14-4ff3-bd1a-6475b8f4ca61/volumes" Oct 07 12:55:41 crc kubenswrapper[5024]: I1007 12:55:41.752390 5024 scope.go:117] "RemoveContainer" containerID="3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9" Oct 07 12:55:41 crc kubenswrapper[5024]: E1007 12:55:41.753560 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 12:55:45 crc kubenswrapper[5024]: I1007 12:55:45.180870 5024 scope.go:117] "RemoveContainer" containerID="77b8511fc7369531f2e543e5f5d22eee7e99254ba4b9152da8ec4bb655e8ee37" Oct 07 12:55:45 crc kubenswrapper[5024]: I1007 12:55:45.208895 5024 scope.go:117] "RemoveContainer" containerID="b439e414ae7c91dce7e818c6914f63349936a3c84a0339e3c63182a312547abc" Oct 07 12:55:45 crc kubenswrapper[5024]: I1007 12:55:45.258639 5024 scope.go:117] "RemoveContainer" containerID="d7c80720f0c8f5bb76d36ef7b83db0f216df1b18106b499129ed5993b41c7c80" Oct 07 12:55:45 crc kubenswrapper[5024]: I1007 12:55:45.316086 5024 scope.go:117] "RemoveContainer" containerID="fab2cf02b8cc3e68ddff4368f2ec64e1a807a0c2bb32b3721042d51e6c6bf952" Oct 07 12:55:45 crc kubenswrapper[5024]: I1007 12:55:45.346524 5024 scope.go:117] "RemoveContainer" containerID="1dae479b3a729849c72bd1a3a39923490522039353db4ae65cb95a4249eb31b2" Oct 07 12:55:45 crc kubenswrapper[5024]: I1007 12:55:45.407628 5024 scope.go:117] "RemoveContainer" containerID="d73fa9175e114156745a5c5800be1356370d31676851805968192ea8d4589dd0" Oct 07 12:55:47 crc kubenswrapper[5024]: I1007 12:55:47.056045 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-bpd9r"] Oct 07 12:55:47 crc 
kubenswrapper[5024]: I1007 12:55:47.066749 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-qq5t7"] Oct 07 12:55:47 crc kubenswrapper[5024]: I1007 12:55:47.078673 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-jt8lj"] Oct 07 12:55:47 crc kubenswrapper[5024]: I1007 12:55:47.089853 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-qq5t7"] Oct 07 12:55:47 crc kubenswrapper[5024]: I1007 12:55:47.097321 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-bpd9r"] Oct 07 12:55:47 crc kubenswrapper[5024]: I1007 12:55:47.104024 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-jt8lj"] Oct 07 12:55:48 crc kubenswrapper[5024]: I1007 12:55:48.768383 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e7d4a23-1940-46f1-9555-dc1eb0154137" path="/var/lib/kubelet/pods/2e7d4a23-1940-46f1-9555-dc1eb0154137/volumes" Oct 07 12:55:48 crc kubenswrapper[5024]: I1007 12:55:48.769455 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="318e6758-8658-4a44-a89e-c663cb02d9f8" path="/var/lib/kubelet/pods/318e6758-8658-4a44-a89e-c663cb02d9f8/volumes" Oct 07 12:55:48 crc kubenswrapper[5024]: I1007 12:55:48.770170 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44ac3513-8a23-45c3-a80e-de4304c2f967" path="/var/lib/kubelet/pods/44ac3513-8a23-45c3-a80e-de4304c2f967/volumes" Oct 07 12:55:55 crc kubenswrapper[5024]: I1007 12:55:55.030406 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-csbb9"] Oct 07 12:55:55 crc kubenswrapper[5024]: I1007 12:55:55.039493 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-csbb9"] Oct 07 12:55:56 crc kubenswrapper[5024]: I1007 12:55:56.752183 5024 scope.go:117] "RemoveContainer" 
containerID="3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9" Oct 07 12:55:56 crc kubenswrapper[5024]: E1007 12:55:56.752853 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 12:55:56 crc kubenswrapper[5024]: I1007 12:55:56.763211 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f2b23db-8959-4c57-bb68-0823d7c75a17" path="/var/lib/kubelet/pods/0f2b23db-8959-4c57-bb68-0823d7c75a17/volumes" Oct 07 12:56:06 crc kubenswrapper[5024]: I1007 12:56:06.471830 5024 generic.go:334] "Generic (PLEG): container finished" podID="e8e73851-73a1-4e34-936f-dc608e4f490b" containerID="d719ec6b642856c166681ace7bd66a4db4b7d023db1ebc6a8cd081870d920f0f" exitCode=0 Oct 07 12:56:06 crc kubenswrapper[5024]: I1007 12:56:06.471953 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqvpm" event={"ID":"e8e73851-73a1-4e34-936f-dc608e4f490b","Type":"ContainerDied","Data":"d719ec6b642856c166681ace7bd66a4db4b7d023db1ebc6a8cd081870d920f0f"} Oct 07 12:56:07 crc kubenswrapper[5024]: I1007 12:56:07.867271 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqvpm" Oct 07 12:56:07 crc kubenswrapper[5024]: I1007 12:56:07.983856 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8e73851-73a1-4e34-936f-dc608e4f490b-inventory\") pod \"e8e73851-73a1-4e34-936f-dc608e4f490b\" (UID: \"e8e73851-73a1-4e34-936f-dc608e4f490b\") " Oct 07 12:56:07 crc kubenswrapper[5024]: I1007 12:56:07.983913 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e8e73851-73a1-4e34-936f-dc608e4f490b-ssh-key\") pod \"e8e73851-73a1-4e34-936f-dc608e4f490b\" (UID: \"e8e73851-73a1-4e34-936f-dc608e4f490b\") " Oct 07 12:56:07 crc kubenswrapper[5024]: I1007 12:56:07.984020 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff4lk\" (UniqueName: \"kubernetes.io/projected/e8e73851-73a1-4e34-936f-dc608e4f490b-kube-api-access-ff4lk\") pod \"e8e73851-73a1-4e34-936f-dc608e4f490b\" (UID: \"e8e73851-73a1-4e34-936f-dc608e4f490b\") " Oct 07 12:56:07 crc kubenswrapper[5024]: I1007 12:56:07.989064 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8e73851-73a1-4e34-936f-dc608e4f490b-kube-api-access-ff4lk" (OuterVolumeSpecName: "kube-api-access-ff4lk") pod "e8e73851-73a1-4e34-936f-dc608e4f490b" (UID: "e8e73851-73a1-4e34-936f-dc608e4f490b"). InnerVolumeSpecName "kube-api-access-ff4lk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:56:08 crc kubenswrapper[5024]: I1007 12:56:08.012348 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8e73851-73a1-4e34-936f-dc608e4f490b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e8e73851-73a1-4e34-936f-dc608e4f490b" (UID: "e8e73851-73a1-4e34-936f-dc608e4f490b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:56:08 crc kubenswrapper[5024]: I1007 12:56:08.026852 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8e73851-73a1-4e34-936f-dc608e4f490b-inventory" (OuterVolumeSpecName: "inventory") pod "e8e73851-73a1-4e34-936f-dc608e4f490b" (UID: "e8e73851-73a1-4e34-936f-dc608e4f490b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:56:08 crc kubenswrapper[5024]: I1007 12:56:08.085445 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e8e73851-73a1-4e34-936f-dc608e4f490b-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 07 12:56:08 crc kubenswrapper[5024]: I1007 12:56:08.085492 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff4lk\" (UniqueName: \"kubernetes.io/projected/e8e73851-73a1-4e34-936f-dc608e4f490b-kube-api-access-ff4lk\") on node \"crc\" DevicePath \"\"" Oct 07 12:56:08 crc kubenswrapper[5024]: I1007 12:56:08.085510 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8e73851-73a1-4e34-936f-dc608e4f490b-inventory\") on node \"crc\" DevicePath \"\"" Oct 07 12:56:08 crc kubenswrapper[5024]: I1007 12:56:08.493342 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqvpm" event={"ID":"e8e73851-73a1-4e34-936f-dc608e4f490b","Type":"ContainerDied","Data":"f1ab993d76deb53e9fd9c937bd0b26da3aca4aaa5310c7747fe348daf0a0cb49"} Oct 07 12:56:08 crc kubenswrapper[5024]: I1007 12:56:08.493395 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqvpm" Oct 07 12:56:08 crc kubenswrapper[5024]: I1007 12:56:08.493420 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1ab993d76deb53e9fd9c937bd0b26da3aca4aaa5310c7747fe348daf0a0cb49" Oct 07 12:56:08 crc kubenswrapper[5024]: I1007 12:56:08.576392 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx"] Oct 07 12:56:08 crc kubenswrapper[5024]: E1007 12:56:08.576836 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8e73851-73a1-4e34-936f-dc608e4f490b" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 07 12:56:08 crc kubenswrapper[5024]: I1007 12:56:08.576856 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8e73851-73a1-4e34-936f-dc608e4f490b" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 07 12:56:08 crc kubenswrapper[5024]: I1007 12:56:08.577065 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8e73851-73a1-4e34-936f-dc608e4f490b" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 07 12:56:08 crc kubenswrapper[5024]: I1007 12:56:08.577686 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx" Oct 07 12:56:08 crc kubenswrapper[5024]: I1007 12:56:08.579962 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 07 12:56:08 crc kubenswrapper[5024]: I1007 12:56:08.579981 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-424lb" Oct 07 12:56:08 crc kubenswrapper[5024]: I1007 12:56:08.580373 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 07 12:56:08 crc kubenswrapper[5024]: I1007 12:56:08.580507 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 07 12:56:08 crc kubenswrapper[5024]: I1007 12:56:08.594002 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx"] Oct 07 12:56:08 crc kubenswrapper[5024]: I1007 12:56:08.695771 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acfe274e-e517-4ccb-8539-3d3e0f87ad2b-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx\" (UID: \"acfe274e-e517-4ccb-8539-3d3e0f87ad2b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx" Oct 07 12:56:08 crc kubenswrapper[5024]: I1007 12:56:08.696374 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnpp7\" (UniqueName: \"kubernetes.io/projected/acfe274e-e517-4ccb-8539-3d3e0f87ad2b-kube-api-access-qnpp7\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx\" (UID: \"acfe274e-e517-4ccb-8539-3d3e0f87ad2b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx" Oct 07 12:56:08 crc kubenswrapper[5024]: I1007 12:56:08.696574 5024 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acfe274e-e517-4ccb-8539-3d3e0f87ad2b-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx\" (UID: \"acfe274e-e517-4ccb-8539-3d3e0f87ad2b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx" Oct 07 12:56:08 crc kubenswrapper[5024]: I1007 12:56:08.798781 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnpp7\" (UniqueName: \"kubernetes.io/projected/acfe274e-e517-4ccb-8539-3d3e0f87ad2b-kube-api-access-qnpp7\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx\" (UID: \"acfe274e-e517-4ccb-8539-3d3e0f87ad2b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx" Oct 07 12:56:08 crc kubenswrapper[5024]: I1007 12:56:08.799080 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acfe274e-e517-4ccb-8539-3d3e0f87ad2b-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx\" (UID: \"acfe274e-e517-4ccb-8539-3d3e0f87ad2b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx" Oct 07 12:56:08 crc kubenswrapper[5024]: I1007 12:56:08.799101 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acfe274e-e517-4ccb-8539-3d3e0f87ad2b-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx\" (UID: \"acfe274e-e517-4ccb-8539-3d3e0f87ad2b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx" Oct 07 12:56:08 crc kubenswrapper[5024]: I1007 12:56:08.806096 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acfe274e-e517-4ccb-8539-3d3e0f87ad2b-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx\" (UID: 
\"acfe274e-e517-4ccb-8539-3d3e0f87ad2b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx" Oct 07 12:56:08 crc kubenswrapper[5024]: I1007 12:56:08.807790 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acfe274e-e517-4ccb-8539-3d3e0f87ad2b-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx\" (UID: \"acfe274e-e517-4ccb-8539-3d3e0f87ad2b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx" Oct 07 12:56:08 crc kubenswrapper[5024]: I1007 12:56:08.815794 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnpp7\" (UniqueName: \"kubernetes.io/projected/acfe274e-e517-4ccb-8539-3d3e0f87ad2b-kube-api-access-qnpp7\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx\" (UID: \"acfe274e-e517-4ccb-8539-3d3e0f87ad2b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx" Oct 07 12:56:08 crc kubenswrapper[5024]: I1007 12:56:08.894022 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx" Oct 07 12:56:09 crc kubenswrapper[5024]: I1007 12:56:09.383637 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx"] Oct 07 12:56:09 crc kubenswrapper[5024]: I1007 12:56:09.502195 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx" event={"ID":"acfe274e-e517-4ccb-8539-3d3e0f87ad2b","Type":"ContainerStarted","Data":"d29ee69cdf9d25a38f85d2cd92e7bcc749d835f15f78396448731f82d0f18aa9"} Oct 07 12:56:09 crc kubenswrapper[5024]: I1007 12:56:09.751744 5024 scope.go:117] "RemoveContainer" containerID="3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9" Oct 07 12:56:09 crc kubenswrapper[5024]: E1007 12:56:09.752091 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 12:56:10 crc kubenswrapper[5024]: I1007 12:56:10.512433 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx" event={"ID":"acfe274e-e517-4ccb-8539-3d3e0f87ad2b","Type":"ContainerStarted","Data":"aed39f75fbb9905306d4f1aeaa21a69f69e75ca142f47b21ec34d6326fb51ec2"} Oct 07 12:56:10 crc kubenswrapper[5024]: I1007 12:56:10.535393 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx" podStartSLOduration=1.948572871 podStartE2EDuration="2.535371469s" podCreationTimestamp="2025-10-07 12:56:08 +0000 UTC" firstStartedPulling="2025-10-07 
12:56:09.386940469 +0000 UTC m=+1707.462727307" lastFinishedPulling="2025-10-07 12:56:09.973739067 +0000 UTC m=+1708.049525905" observedRunningTime="2025-10-07 12:56:10.528765606 +0000 UTC m=+1708.604552444" watchObservedRunningTime="2025-10-07 12:56:10.535371469 +0000 UTC m=+1708.611158307" Oct 07 12:56:11 crc kubenswrapper[5024]: I1007 12:56:11.069430 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-5c90-account-create-gb4jv"] Oct 07 12:56:11 crc kubenswrapper[5024]: I1007 12:56:11.084547 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-e88c-account-create-pdvhd"] Oct 07 12:56:11 crc kubenswrapper[5024]: I1007 12:56:11.095028 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-5c90-account-create-gb4jv"] Oct 07 12:56:11 crc kubenswrapper[5024]: I1007 12:56:11.108009 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f874-account-create-5s8rq"] Oct 07 12:56:11 crc kubenswrapper[5024]: I1007 12:56:11.120322 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-e88c-account-create-pdvhd"] Oct 07 12:56:11 crc kubenswrapper[5024]: I1007 12:56:11.130405 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-f874-account-create-5s8rq"] Oct 07 12:56:12 crc kubenswrapper[5024]: I1007 12:56:12.781446 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="151cc44b-5fbc-401a-81b1-b65ffa4b85b1" path="/var/lib/kubelet/pods/151cc44b-5fbc-401a-81b1-b65ffa4b85b1/volumes" Oct 07 12:56:12 crc kubenswrapper[5024]: I1007 12:56:12.782520 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2520e994-c7f7-4439-9c46-8398e1b55cf8" path="/var/lib/kubelet/pods/2520e994-c7f7-4439-9c46-8398e1b55cf8/volumes" Oct 07 12:56:12 crc kubenswrapper[5024]: I1007 12:56:12.783037 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c35bee31-8ade-4f6d-b6fe-35d989d2e251" 
path="/var/lib/kubelet/pods/c35bee31-8ade-4f6d-b6fe-35d989d2e251/volumes" Oct 07 12:56:13 crc kubenswrapper[5024]: I1007 12:56:13.030912 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-cvp9t"] Oct 07 12:56:13 crc kubenswrapper[5024]: I1007 12:56:13.041266 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-cvp9t"] Oct 07 12:56:14 crc kubenswrapper[5024]: I1007 12:56:14.545683 5024 generic.go:334] "Generic (PLEG): container finished" podID="acfe274e-e517-4ccb-8539-3d3e0f87ad2b" containerID="aed39f75fbb9905306d4f1aeaa21a69f69e75ca142f47b21ec34d6326fb51ec2" exitCode=0 Oct 07 12:56:14 crc kubenswrapper[5024]: I1007 12:56:14.545736 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx" event={"ID":"acfe274e-e517-4ccb-8539-3d3e0f87ad2b","Type":"ContainerDied","Data":"aed39f75fbb9905306d4f1aeaa21a69f69e75ca142f47b21ec34d6326fb51ec2"} Oct 07 12:56:14 crc kubenswrapper[5024]: I1007 12:56:14.764613 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36b57d44-185e-4645-9078-4deb8da00531" path="/var/lib/kubelet/pods/36b57d44-185e-4645-9078-4deb8da00531/volumes" Oct 07 12:56:15 crc kubenswrapper[5024]: I1007 12:56:15.888218 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx" Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.046242 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acfe274e-e517-4ccb-8539-3d3e0f87ad2b-ssh-key\") pod \"acfe274e-e517-4ccb-8539-3d3e0f87ad2b\" (UID: \"acfe274e-e517-4ccb-8539-3d3e0f87ad2b\") " Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.046302 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acfe274e-e517-4ccb-8539-3d3e0f87ad2b-inventory\") pod \"acfe274e-e517-4ccb-8539-3d3e0f87ad2b\" (UID: \"acfe274e-e517-4ccb-8539-3d3e0f87ad2b\") " Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.046421 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnpp7\" (UniqueName: \"kubernetes.io/projected/acfe274e-e517-4ccb-8539-3d3e0f87ad2b-kube-api-access-qnpp7\") pod \"acfe274e-e517-4ccb-8539-3d3e0f87ad2b\" (UID: \"acfe274e-e517-4ccb-8539-3d3e0f87ad2b\") " Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.053408 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acfe274e-e517-4ccb-8539-3d3e0f87ad2b-kube-api-access-qnpp7" (OuterVolumeSpecName: "kube-api-access-qnpp7") pod "acfe274e-e517-4ccb-8539-3d3e0f87ad2b" (UID: "acfe274e-e517-4ccb-8539-3d3e0f87ad2b"). InnerVolumeSpecName "kube-api-access-qnpp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.076020 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acfe274e-e517-4ccb-8539-3d3e0f87ad2b-inventory" (OuterVolumeSpecName: "inventory") pod "acfe274e-e517-4ccb-8539-3d3e0f87ad2b" (UID: "acfe274e-e517-4ccb-8539-3d3e0f87ad2b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.078309 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acfe274e-e517-4ccb-8539-3d3e0f87ad2b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "acfe274e-e517-4ccb-8539-3d3e0f87ad2b" (UID: "acfe274e-e517-4ccb-8539-3d3e0f87ad2b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.148608 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acfe274e-e517-4ccb-8539-3d3e0f87ad2b-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.148637 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acfe274e-e517-4ccb-8539-3d3e0f87ad2b-inventory\") on node \"crc\" DevicePath \"\"" Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.148647 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnpp7\" (UniqueName: \"kubernetes.io/projected/acfe274e-e517-4ccb-8539-3d3e0f87ad2b-kube-api-access-qnpp7\") on node \"crc\" DevicePath \"\"" Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.563201 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx" event={"ID":"acfe274e-e517-4ccb-8539-3d3e0f87ad2b","Type":"ContainerDied","Data":"d29ee69cdf9d25a38f85d2cd92e7bcc749d835f15f78396448731f82d0f18aa9"} Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.563529 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d29ee69cdf9d25a38f85d2cd92e7bcc749d835f15f78396448731f82d0f18aa9" Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.563213 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx" Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.613884 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx"] Oct 07 12:56:16 crc kubenswrapper[5024]: E1007 12:56:16.614357 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acfe274e-e517-4ccb-8539-3d3e0f87ad2b" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.614381 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="acfe274e-e517-4ccb-8539-3d3e0f87ad2b" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.614612 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="acfe274e-e517-4ccb-8539-3d3e0f87ad2b" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.615427 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx" Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.617316 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.617329 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.617387 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-424lb" Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.617514 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.621579 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx"] Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.758614 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b7b80b0-5656-47b7-8da5-bd0b25255076-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx\" (UID: \"1b7b80b0-5656-47b7-8da5-bd0b25255076\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx" Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.758759 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rvxv\" (UniqueName: \"kubernetes.io/projected/1b7b80b0-5656-47b7-8da5-bd0b25255076-kube-api-access-7rvxv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx\" (UID: \"1b7b80b0-5656-47b7-8da5-bd0b25255076\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx" Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.758840 5024 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1b7b80b0-5656-47b7-8da5-bd0b25255076-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx\" (UID: \"1b7b80b0-5656-47b7-8da5-bd0b25255076\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx" Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.860432 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1b7b80b0-5656-47b7-8da5-bd0b25255076-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx\" (UID: \"1b7b80b0-5656-47b7-8da5-bd0b25255076\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx" Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.860530 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b7b80b0-5656-47b7-8da5-bd0b25255076-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx\" (UID: \"1b7b80b0-5656-47b7-8da5-bd0b25255076\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx" Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.860613 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rvxv\" (UniqueName: \"kubernetes.io/projected/1b7b80b0-5656-47b7-8da5-bd0b25255076-kube-api-access-7rvxv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx\" (UID: \"1b7b80b0-5656-47b7-8da5-bd0b25255076\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx" Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.864116 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b7b80b0-5656-47b7-8da5-bd0b25255076-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx\" (UID: 
\"1b7b80b0-5656-47b7-8da5-bd0b25255076\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx" Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.864131 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1b7b80b0-5656-47b7-8da5-bd0b25255076-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx\" (UID: \"1b7b80b0-5656-47b7-8da5-bd0b25255076\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx" Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.879755 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rvxv\" (UniqueName: \"kubernetes.io/projected/1b7b80b0-5656-47b7-8da5-bd0b25255076-kube-api-access-7rvxv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx\" (UID: \"1b7b80b0-5656-47b7-8da5-bd0b25255076\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx" Oct 07 12:56:16 crc kubenswrapper[5024]: I1007 12:56:16.939018 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx" Oct 07 12:56:17 crc kubenswrapper[5024]: I1007 12:56:17.044042 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-8vb4z"] Oct 07 12:56:17 crc kubenswrapper[5024]: I1007 12:56:17.055266 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-8vb4z"] Oct 07 12:56:17 crc kubenswrapper[5024]: I1007 12:56:17.458666 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx"] Oct 07 12:56:17 crc kubenswrapper[5024]: I1007 12:56:17.573823 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx" event={"ID":"1b7b80b0-5656-47b7-8da5-bd0b25255076","Type":"ContainerStarted","Data":"8b56eb52726910267e19996fc75f1fc7d741c05fc58b1d0f49cf00d017c043df"} Oct 07 12:56:18 crc kubenswrapper[5024]: I1007 12:56:18.584930 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx" event={"ID":"1b7b80b0-5656-47b7-8da5-bd0b25255076","Type":"ContainerStarted","Data":"fe37dd83f62bd1503813ddf10757a9fe25b64ed0b5accf5bff46c2e0f23aaf5f"} Oct 07 12:56:18 crc kubenswrapper[5024]: I1007 12:56:18.613805 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx" podStartSLOduration=2.121646592 podStartE2EDuration="2.613783241s" podCreationTimestamp="2025-10-07 12:56:16 +0000 UTC" firstStartedPulling="2025-10-07 12:56:17.460932622 +0000 UTC m=+1715.536719460" lastFinishedPulling="2025-10-07 12:56:17.953069271 +0000 UTC m=+1716.028856109" observedRunningTime="2025-10-07 12:56:18.607756644 +0000 UTC m=+1716.683543492" watchObservedRunningTime="2025-10-07 12:56:18.613783241 +0000 UTC m=+1716.689570079" Oct 07 12:56:18 crc kubenswrapper[5024]: I1007 
12:56:18.767380 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dddaa23e-2e38-4835-a311-69a6e7ef3c16" path="/var/lib/kubelet/pods/dddaa23e-2e38-4835-a311-69a6e7ef3c16/volumes" Oct 07 12:56:21 crc kubenswrapper[5024]: I1007 12:56:21.036384 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-9nmmq"] Oct 07 12:56:21 crc kubenswrapper[5024]: I1007 12:56:21.044254 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-9nmmq"] Oct 07 12:56:22 crc kubenswrapper[5024]: I1007 12:56:22.760804 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4c25bea-71ea-4c21-9331-19b58c0fdd89" path="/var/lib/kubelet/pods/b4c25bea-71ea-4c21-9331-19b58c0fdd89/volumes" Oct 07 12:56:23 crc kubenswrapper[5024]: I1007 12:56:23.752297 5024 scope.go:117] "RemoveContainer" containerID="3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9" Oct 07 12:56:23 crc kubenswrapper[5024]: E1007 12:56:23.752547 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 12:56:32 crc kubenswrapper[5024]: I1007 12:56:32.817796 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-crdgz"] Oct 07 12:56:32 crc kubenswrapper[5024]: I1007 12:56:32.821837 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-crdgz" Oct 07 12:56:32 crc kubenswrapper[5024]: I1007 12:56:32.827298 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-crdgz"] Oct 07 12:56:32 crc kubenswrapper[5024]: I1007 12:56:32.961706 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71b11798-29d2-41af-b75d-2eace28114e0-utilities\") pod \"community-operators-crdgz\" (UID: \"71b11798-29d2-41af-b75d-2eace28114e0\") " pod="openshift-marketplace/community-operators-crdgz" Oct 07 12:56:32 crc kubenswrapper[5024]: I1007 12:56:32.962120 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71b11798-29d2-41af-b75d-2eace28114e0-catalog-content\") pod \"community-operators-crdgz\" (UID: \"71b11798-29d2-41af-b75d-2eace28114e0\") " pod="openshift-marketplace/community-operators-crdgz" Oct 07 12:56:32 crc kubenswrapper[5024]: I1007 12:56:32.962273 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chxf7\" (UniqueName: \"kubernetes.io/projected/71b11798-29d2-41af-b75d-2eace28114e0-kube-api-access-chxf7\") pod \"community-operators-crdgz\" (UID: \"71b11798-29d2-41af-b75d-2eace28114e0\") " pod="openshift-marketplace/community-operators-crdgz" Oct 07 12:56:33 crc kubenswrapper[5024]: I1007 12:56:33.064482 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chxf7\" (UniqueName: \"kubernetes.io/projected/71b11798-29d2-41af-b75d-2eace28114e0-kube-api-access-chxf7\") pod \"community-operators-crdgz\" (UID: \"71b11798-29d2-41af-b75d-2eace28114e0\") " pod="openshift-marketplace/community-operators-crdgz" Oct 07 12:56:33 crc kubenswrapper[5024]: I1007 12:56:33.064652 5024 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71b11798-29d2-41af-b75d-2eace28114e0-utilities\") pod \"community-operators-crdgz\" (UID: \"71b11798-29d2-41af-b75d-2eace28114e0\") " pod="openshift-marketplace/community-operators-crdgz" Oct 07 12:56:33 crc kubenswrapper[5024]: I1007 12:56:33.064679 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71b11798-29d2-41af-b75d-2eace28114e0-catalog-content\") pod \"community-operators-crdgz\" (UID: \"71b11798-29d2-41af-b75d-2eace28114e0\") " pod="openshift-marketplace/community-operators-crdgz" Oct 07 12:56:33 crc kubenswrapper[5024]: I1007 12:56:33.065358 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71b11798-29d2-41af-b75d-2eace28114e0-catalog-content\") pod \"community-operators-crdgz\" (UID: \"71b11798-29d2-41af-b75d-2eace28114e0\") " pod="openshift-marketplace/community-operators-crdgz" Oct 07 12:56:33 crc kubenswrapper[5024]: I1007 12:56:33.065534 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71b11798-29d2-41af-b75d-2eace28114e0-utilities\") pod \"community-operators-crdgz\" (UID: \"71b11798-29d2-41af-b75d-2eace28114e0\") " pod="openshift-marketplace/community-operators-crdgz" Oct 07 12:56:33 crc kubenswrapper[5024]: I1007 12:56:33.083997 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chxf7\" (UniqueName: \"kubernetes.io/projected/71b11798-29d2-41af-b75d-2eace28114e0-kube-api-access-chxf7\") pod \"community-operators-crdgz\" (UID: \"71b11798-29d2-41af-b75d-2eace28114e0\") " pod="openshift-marketplace/community-operators-crdgz" Oct 07 12:56:33 crc kubenswrapper[5024]: I1007 12:56:33.180833 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-crdgz" Oct 07 12:56:33 crc kubenswrapper[5024]: I1007 12:56:33.699919 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-crdgz"] Oct 07 12:56:33 crc kubenswrapper[5024]: W1007 12:56:33.701607 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71b11798_29d2_41af_b75d_2eace28114e0.slice/crio-31d936c36db670d2610a7ecd2a73fdf0656dc516434193a3f7a9a2489d5d7ad5 WatchSource:0}: Error finding container 31d936c36db670d2610a7ecd2a73fdf0656dc516434193a3f7a9a2489d5d7ad5: Status 404 returned error can't find the container with id 31d936c36db670d2610a7ecd2a73fdf0656dc516434193a3f7a9a2489d5d7ad5 Oct 07 12:56:33 crc kubenswrapper[5024]: I1007 12:56:33.729178 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-crdgz" event={"ID":"71b11798-29d2-41af-b75d-2eace28114e0","Type":"ContainerStarted","Data":"31d936c36db670d2610a7ecd2a73fdf0656dc516434193a3f7a9a2489d5d7ad5"} Oct 07 12:56:34 crc kubenswrapper[5024]: I1007 12:56:34.739161 5024 generic.go:334] "Generic (PLEG): container finished" podID="71b11798-29d2-41af-b75d-2eace28114e0" containerID="eea6a9e9c314084041d3a2285436becfcc7bcd766da5cb8afe9db09d5b91a7d4" exitCode=0 Oct 07 12:56:34 crc kubenswrapper[5024]: I1007 12:56:34.739231 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-crdgz" event={"ID":"71b11798-29d2-41af-b75d-2eace28114e0","Type":"ContainerDied","Data":"eea6a9e9c314084041d3a2285436becfcc7bcd766da5cb8afe9db09d5b91a7d4"} Oct 07 12:56:35 crc kubenswrapper[5024]: I1007 12:56:35.749402 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-crdgz" 
event={"ID":"71b11798-29d2-41af-b75d-2eace28114e0","Type":"ContainerStarted","Data":"d1850428978d27e0a6f26d55bf5cc26025605158f3d45da45ffb2581daafa6a6"} Oct 07 12:56:35 crc kubenswrapper[5024]: I1007 12:56:35.752191 5024 scope.go:117] "RemoveContainer" containerID="3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9" Oct 07 12:56:35 crc kubenswrapper[5024]: E1007 12:56:35.752422 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 12:56:36 crc kubenswrapper[5024]: I1007 12:56:36.761077 5024 generic.go:334] "Generic (PLEG): container finished" podID="71b11798-29d2-41af-b75d-2eace28114e0" containerID="d1850428978d27e0a6f26d55bf5cc26025605158f3d45da45ffb2581daafa6a6" exitCode=0 Oct 07 12:56:36 crc kubenswrapper[5024]: I1007 12:56:36.765528 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-crdgz" event={"ID":"71b11798-29d2-41af-b75d-2eace28114e0","Type":"ContainerDied","Data":"d1850428978d27e0a6f26d55bf5cc26025605158f3d45da45ffb2581daafa6a6"} Oct 07 12:56:37 crc kubenswrapper[5024]: I1007 12:56:37.772113 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-crdgz" event={"ID":"71b11798-29d2-41af-b75d-2eace28114e0","Type":"ContainerStarted","Data":"86a80d461bb194afc9d6508de5773bbd404f5320886705cbcd4a690ad041c962"} Oct 07 12:56:37 crc kubenswrapper[5024]: I1007 12:56:37.793353 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-crdgz" podStartSLOduration=3.316880939 podStartE2EDuration="5.793335273s" 
podCreationTimestamp="2025-10-07 12:56:32 +0000 UTC" firstStartedPulling="2025-10-07 12:56:34.741056112 +0000 UTC m=+1732.816842950" lastFinishedPulling="2025-10-07 12:56:37.217510446 +0000 UTC m=+1735.293297284" observedRunningTime="2025-10-07 12:56:37.789191342 +0000 UTC m=+1735.864978200" watchObservedRunningTime="2025-10-07 12:56:37.793335273 +0000 UTC m=+1735.869122111" Oct 07 12:56:40 crc kubenswrapper[5024]: I1007 12:56:40.038290 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-gtjxh"] Oct 07 12:56:40 crc kubenswrapper[5024]: I1007 12:56:40.049810 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-gtjxh"] Oct 07 12:56:40 crc kubenswrapper[5024]: I1007 12:56:40.762956 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1a60ce0-997e-4a92-9ed2-8326cd95d4a3" path="/var/lib/kubelet/pods/b1a60ce0-997e-4a92-9ed2-8326cd95d4a3/volumes" Oct 07 12:56:43 crc kubenswrapper[5024]: I1007 12:56:43.181576 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-crdgz" Oct 07 12:56:43 crc kubenswrapper[5024]: I1007 12:56:43.181907 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-crdgz" Oct 07 12:56:43 crc kubenswrapper[5024]: I1007 12:56:43.222211 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-crdgz" Oct 07 12:56:43 crc kubenswrapper[5024]: I1007 12:56:43.873626 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-crdgz" Oct 07 12:56:43 crc kubenswrapper[5024]: I1007 12:56:43.921050 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-crdgz"] Oct 07 12:56:45 crc kubenswrapper[5024]: I1007 12:56:45.695349 5024 scope.go:117] "RemoveContainer" 
containerID="4dadeaecc87c1adc8f22f54a36e0c09d498e580e3a58b5502e09efa84f8e7f12" Oct 07 12:56:45 crc kubenswrapper[5024]: I1007 12:56:45.731600 5024 scope.go:117] "RemoveContainer" containerID="cd977951a2da68440677fe4ce3ddec1236709987b09d6ccd5b9d08f1f7007fca" Oct 07 12:56:45 crc kubenswrapper[5024]: I1007 12:56:45.786942 5024 scope.go:117] "RemoveContainer" containerID="6e10551055ee44a3d93397257dbecad0300326da14a1064b1369159915e9f866" Oct 07 12:56:45 crc kubenswrapper[5024]: I1007 12:56:45.815016 5024 scope.go:117] "RemoveContainer" containerID="c8bb8b6c3282f6fb85e0525e2327bf96b3267d635d8e8cf466c9ce6e2a5e7ba4" Oct 07 12:56:45 crc kubenswrapper[5024]: I1007 12:56:45.848260 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-crdgz" podUID="71b11798-29d2-41af-b75d-2eace28114e0" containerName="registry-server" containerID="cri-o://86a80d461bb194afc9d6508de5773bbd404f5320886705cbcd4a690ad041c962" gracePeriod=2 Oct 07 12:56:45 crc kubenswrapper[5024]: I1007 12:56:45.887237 5024 scope.go:117] "RemoveContainer" containerID="9640940a21c8d739424e837add2332f9e9ed19c44468aa67fbd0522f3382b5be" Oct 07 12:56:45 crc kubenswrapper[5024]: I1007 12:56:45.949714 5024 scope.go:117] "RemoveContainer" containerID="1b5f3f879ee298b0e215916b1924fedfb6651c8ff1da9920346699e5acd0defc" Oct 07 12:56:45 crc kubenswrapper[5024]: I1007 12:56:45.990372 5024 scope.go:117] "RemoveContainer" containerID="db97de51cefbb6f8fec0daf7081808cce4d7c30ab6bf39d2459db02b7c29c4cd" Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.027381 5024 scope.go:117] "RemoveContainer" containerID="2d242175b162f1346a0e2e34906a9f49ca7bea5f79251df561f25100b0b0baa8" Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.062981 5024 scope.go:117] "RemoveContainer" containerID="00554f87e0cc42c2324f8557cca0b3338cfe6514266dac6c4e6588726c7fc4d5" Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.086276 5024 scope.go:117] "RemoveContainer" 
containerID="e475246913a93bc814397b24b01b47361d58cb4c482c0a8636b160bbbf029756" Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.120845 5024 scope.go:117] "RemoveContainer" containerID="d8928a2ab208f1c85e289020d0f03678bb13dfa7df8cc55901b893db2cb45c31" Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.204968 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-crdgz" Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.299725 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chxf7\" (UniqueName: \"kubernetes.io/projected/71b11798-29d2-41af-b75d-2eace28114e0-kube-api-access-chxf7\") pod \"71b11798-29d2-41af-b75d-2eace28114e0\" (UID: \"71b11798-29d2-41af-b75d-2eace28114e0\") " Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.299855 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71b11798-29d2-41af-b75d-2eace28114e0-catalog-content\") pod \"71b11798-29d2-41af-b75d-2eace28114e0\" (UID: \"71b11798-29d2-41af-b75d-2eace28114e0\") " Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.299903 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71b11798-29d2-41af-b75d-2eace28114e0-utilities\") pod \"71b11798-29d2-41af-b75d-2eace28114e0\" (UID: \"71b11798-29d2-41af-b75d-2eace28114e0\") " Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.301039 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71b11798-29d2-41af-b75d-2eace28114e0-utilities" (OuterVolumeSpecName: "utilities") pod "71b11798-29d2-41af-b75d-2eace28114e0" (UID: "71b11798-29d2-41af-b75d-2eace28114e0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.305950 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71b11798-29d2-41af-b75d-2eace28114e0-kube-api-access-chxf7" (OuterVolumeSpecName: "kube-api-access-chxf7") pod "71b11798-29d2-41af-b75d-2eace28114e0" (UID: "71b11798-29d2-41af-b75d-2eace28114e0"). InnerVolumeSpecName "kube-api-access-chxf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.352889 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71b11798-29d2-41af-b75d-2eace28114e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71b11798-29d2-41af-b75d-2eace28114e0" (UID: "71b11798-29d2-41af-b75d-2eace28114e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.402211 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71b11798-29d2-41af-b75d-2eace28114e0-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.402238 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71b11798-29d2-41af-b75d-2eace28114e0-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.402249 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chxf7\" (UniqueName: \"kubernetes.io/projected/71b11798-29d2-41af-b75d-2eace28114e0-kube-api-access-chxf7\") on node \"crc\" DevicePath \"\"" Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.857289 5024 generic.go:334] "Generic (PLEG): container finished" podID="71b11798-29d2-41af-b75d-2eace28114e0" 
containerID="86a80d461bb194afc9d6508de5773bbd404f5320886705cbcd4a690ad041c962" exitCode=0 Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.857355 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-crdgz" Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.857376 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-crdgz" event={"ID":"71b11798-29d2-41af-b75d-2eace28114e0","Type":"ContainerDied","Data":"86a80d461bb194afc9d6508de5773bbd404f5320886705cbcd4a690ad041c962"} Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.857730 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-crdgz" event={"ID":"71b11798-29d2-41af-b75d-2eace28114e0","Type":"ContainerDied","Data":"31d936c36db670d2610a7ecd2a73fdf0656dc516434193a3f7a9a2489d5d7ad5"} Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.857765 5024 scope.go:117] "RemoveContainer" containerID="86a80d461bb194afc9d6508de5773bbd404f5320886705cbcd4a690ad041c962" Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.879788 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-crdgz"] Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.884450 5024 scope.go:117] "RemoveContainer" containerID="d1850428978d27e0a6f26d55bf5cc26025605158f3d45da45ffb2581daafa6a6" Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.910252 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-crdgz"] Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.912021 5024 scope.go:117] "RemoveContainer" containerID="eea6a9e9c314084041d3a2285436becfcc7bcd766da5cb8afe9db09d5b91a7d4" Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.959354 5024 scope.go:117] "RemoveContainer" containerID="86a80d461bb194afc9d6508de5773bbd404f5320886705cbcd4a690ad041c962" Oct 07 
12:56:46 crc kubenswrapper[5024]: E1007 12:56:46.959720 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86a80d461bb194afc9d6508de5773bbd404f5320886705cbcd4a690ad041c962\": container with ID starting with 86a80d461bb194afc9d6508de5773bbd404f5320886705cbcd4a690ad041c962 not found: ID does not exist" containerID="86a80d461bb194afc9d6508de5773bbd404f5320886705cbcd4a690ad041c962" Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.959762 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86a80d461bb194afc9d6508de5773bbd404f5320886705cbcd4a690ad041c962"} err="failed to get container status \"86a80d461bb194afc9d6508de5773bbd404f5320886705cbcd4a690ad041c962\": rpc error: code = NotFound desc = could not find container \"86a80d461bb194afc9d6508de5773bbd404f5320886705cbcd4a690ad041c962\": container with ID starting with 86a80d461bb194afc9d6508de5773bbd404f5320886705cbcd4a690ad041c962 not found: ID does not exist" Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.959786 5024 scope.go:117] "RemoveContainer" containerID="d1850428978d27e0a6f26d55bf5cc26025605158f3d45da45ffb2581daafa6a6" Oct 07 12:56:46 crc kubenswrapper[5024]: E1007 12:56:46.960080 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1850428978d27e0a6f26d55bf5cc26025605158f3d45da45ffb2581daafa6a6\": container with ID starting with d1850428978d27e0a6f26d55bf5cc26025605158f3d45da45ffb2581daafa6a6 not found: ID does not exist" containerID="d1850428978d27e0a6f26d55bf5cc26025605158f3d45da45ffb2581daafa6a6" Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.960104 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1850428978d27e0a6f26d55bf5cc26025605158f3d45da45ffb2581daafa6a6"} err="failed to get container status 
\"d1850428978d27e0a6f26d55bf5cc26025605158f3d45da45ffb2581daafa6a6\": rpc error: code = NotFound desc = could not find container \"d1850428978d27e0a6f26d55bf5cc26025605158f3d45da45ffb2581daafa6a6\": container with ID starting with d1850428978d27e0a6f26d55bf5cc26025605158f3d45da45ffb2581daafa6a6 not found: ID does not exist" Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.960121 5024 scope.go:117] "RemoveContainer" containerID="eea6a9e9c314084041d3a2285436becfcc7bcd766da5cb8afe9db09d5b91a7d4" Oct 07 12:56:46 crc kubenswrapper[5024]: E1007 12:56:46.960389 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eea6a9e9c314084041d3a2285436becfcc7bcd766da5cb8afe9db09d5b91a7d4\": container with ID starting with eea6a9e9c314084041d3a2285436becfcc7bcd766da5cb8afe9db09d5b91a7d4 not found: ID does not exist" containerID="eea6a9e9c314084041d3a2285436becfcc7bcd766da5cb8afe9db09d5b91a7d4" Oct 07 12:56:46 crc kubenswrapper[5024]: I1007 12:56:46.960414 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eea6a9e9c314084041d3a2285436becfcc7bcd766da5cb8afe9db09d5b91a7d4"} err="failed to get container status \"eea6a9e9c314084041d3a2285436becfcc7bcd766da5cb8afe9db09d5b91a7d4\": rpc error: code = NotFound desc = could not find container \"eea6a9e9c314084041d3a2285436becfcc7bcd766da5cb8afe9db09d5b91a7d4\": container with ID starting with eea6a9e9c314084041d3a2285436becfcc7bcd766da5cb8afe9db09d5b91a7d4 not found: ID does not exist" Oct 07 12:56:47 crc kubenswrapper[5024]: I1007 12:56:47.030763 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-s85hg"] Oct 07 12:56:47 crc kubenswrapper[5024]: I1007 12:56:47.037884 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-s85hg"] Oct 07 12:56:48 crc kubenswrapper[5024]: I1007 12:56:48.763555 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="71b11798-29d2-41af-b75d-2eace28114e0" path="/var/lib/kubelet/pods/71b11798-29d2-41af-b75d-2eace28114e0/volumes" Oct 07 12:56:48 crc kubenswrapper[5024]: I1007 12:56:48.764536 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2f43bf9-8914-4def-a454-a4e5bd3d843b" path="/var/lib/kubelet/pods/a2f43bf9-8914-4def-a454-a4e5bd3d843b/volumes" Oct 07 12:56:49 crc kubenswrapper[5024]: I1007 12:56:49.751989 5024 scope.go:117] "RemoveContainer" containerID="3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9" Oct 07 12:56:49 crc kubenswrapper[5024]: E1007 12:56:49.752285 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 12:57:02 crc kubenswrapper[5024]: I1007 12:57:02.756481 5024 scope.go:117] "RemoveContainer" containerID="3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9" Oct 07 12:57:02 crc kubenswrapper[5024]: E1007 12:57:02.757412 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 12:57:08 crc kubenswrapper[5024]: I1007 12:57:08.059263 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-pzbf6"] Oct 07 12:57:08 crc kubenswrapper[5024]: I1007 12:57:08.070567 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-db-sync-pzbf6"] Oct 07 12:57:08 crc kubenswrapper[5024]: I1007 12:57:08.763666 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ac3ad2e-791b-4133-8417-61c5465da6ea" path="/var/lib/kubelet/pods/2ac3ad2e-791b-4133-8417-61c5465da6ea/volumes" Oct 07 12:57:11 crc kubenswrapper[5024]: I1007 12:57:11.026982 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-sh44r"] Oct 07 12:57:11 crc kubenswrapper[5024]: I1007 12:57:11.039453 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-sh44r"] Oct 07 12:57:12 crc kubenswrapper[5024]: I1007 12:57:12.024443 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-kqnsq"] Oct 07 12:57:12 crc kubenswrapper[5024]: I1007 12:57:12.039500 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-kqnsq"] Oct 07 12:57:12 crc kubenswrapper[5024]: I1007 12:57:12.046538 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-z29vv"] Oct 07 12:57:12 crc kubenswrapper[5024]: I1007 12:57:12.053287 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-z29vv"] Oct 07 12:57:12 crc kubenswrapper[5024]: I1007 12:57:12.765049 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="346dd65c-ac64-4c3b-b9d0-6f4d16e19f51" path="/var/lib/kubelet/pods/346dd65c-ac64-4c3b-b9d0-6f4d16e19f51/volumes" Oct 07 12:57:12 crc kubenswrapper[5024]: I1007 12:57:12.765841 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65244b6b-8bbf-47a0-b0e8-568a6fdba17d" path="/var/lib/kubelet/pods/65244b6b-8bbf-47a0-b0e8-568a6fdba17d/volumes" Oct 07 12:57:12 crc kubenswrapper[5024]: I1007 12:57:12.766339 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a971f3af-a1cf-4423-b776-bb662957878c" 
path="/var/lib/kubelet/pods/a971f3af-a1cf-4423-b776-bb662957878c/volumes" Oct 07 12:57:13 crc kubenswrapper[5024]: I1007 12:57:13.076039 5024 generic.go:334] "Generic (PLEG): container finished" podID="1b7b80b0-5656-47b7-8da5-bd0b25255076" containerID="fe37dd83f62bd1503813ddf10757a9fe25b64ed0b5accf5bff46c2e0f23aaf5f" exitCode=2 Oct 07 12:57:13 crc kubenswrapper[5024]: I1007 12:57:13.076096 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx" event={"ID":"1b7b80b0-5656-47b7-8da5-bd0b25255076","Type":"ContainerDied","Data":"fe37dd83f62bd1503813ddf10757a9fe25b64ed0b5accf5bff46c2e0f23aaf5f"} Oct 07 12:57:14 crc kubenswrapper[5024]: I1007 12:57:14.558383 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx" Oct 07 12:57:14 crc kubenswrapper[5024]: I1007 12:57:14.639494 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b7b80b0-5656-47b7-8da5-bd0b25255076-inventory\") pod \"1b7b80b0-5656-47b7-8da5-bd0b25255076\" (UID: \"1b7b80b0-5656-47b7-8da5-bd0b25255076\") " Oct 07 12:57:14 crc kubenswrapper[5024]: I1007 12:57:14.639708 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rvxv\" (UniqueName: \"kubernetes.io/projected/1b7b80b0-5656-47b7-8da5-bd0b25255076-kube-api-access-7rvxv\") pod \"1b7b80b0-5656-47b7-8da5-bd0b25255076\" (UID: \"1b7b80b0-5656-47b7-8da5-bd0b25255076\") " Oct 07 12:57:14 crc kubenswrapper[5024]: I1007 12:57:14.639854 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1b7b80b0-5656-47b7-8da5-bd0b25255076-ssh-key\") pod \"1b7b80b0-5656-47b7-8da5-bd0b25255076\" (UID: \"1b7b80b0-5656-47b7-8da5-bd0b25255076\") " Oct 07 12:57:14 crc kubenswrapper[5024]: I1007 12:57:14.647374 5024 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b7b80b0-5656-47b7-8da5-bd0b25255076-kube-api-access-7rvxv" (OuterVolumeSpecName: "kube-api-access-7rvxv") pod "1b7b80b0-5656-47b7-8da5-bd0b25255076" (UID: "1b7b80b0-5656-47b7-8da5-bd0b25255076"). InnerVolumeSpecName "kube-api-access-7rvxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:57:14 crc kubenswrapper[5024]: I1007 12:57:14.667333 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b7b80b0-5656-47b7-8da5-bd0b25255076-inventory" (OuterVolumeSpecName: "inventory") pod "1b7b80b0-5656-47b7-8da5-bd0b25255076" (UID: "1b7b80b0-5656-47b7-8da5-bd0b25255076"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:57:14 crc kubenswrapper[5024]: I1007 12:57:14.705660 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b7b80b0-5656-47b7-8da5-bd0b25255076-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1b7b80b0-5656-47b7-8da5-bd0b25255076" (UID: "1b7b80b0-5656-47b7-8da5-bd0b25255076"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:57:14 crc kubenswrapper[5024]: I1007 12:57:14.742129 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rvxv\" (UniqueName: \"kubernetes.io/projected/1b7b80b0-5656-47b7-8da5-bd0b25255076-kube-api-access-7rvxv\") on node \"crc\" DevicePath \"\"" Oct 07 12:57:14 crc kubenswrapper[5024]: I1007 12:57:14.742170 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1b7b80b0-5656-47b7-8da5-bd0b25255076-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 07 12:57:14 crc kubenswrapper[5024]: I1007 12:57:14.742180 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b7b80b0-5656-47b7-8da5-bd0b25255076-inventory\") on node \"crc\" DevicePath \"\"" Oct 07 12:57:15 crc kubenswrapper[5024]: I1007 12:57:15.098340 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx" event={"ID":"1b7b80b0-5656-47b7-8da5-bd0b25255076","Type":"ContainerDied","Data":"8b56eb52726910267e19996fc75f1fc7d741c05fc58b1d0f49cf00d017c043df"} Oct 07 12:57:15 crc kubenswrapper[5024]: I1007 12:57:15.098411 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b56eb52726910267e19996fc75f1fc7d741c05fc58b1d0f49cf00d017c043df" Oct 07 12:57:15 crc kubenswrapper[5024]: I1007 12:57:15.098412 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx" Oct 07 12:57:16 crc kubenswrapper[5024]: I1007 12:57:16.752595 5024 scope.go:117] "RemoveContainer" containerID="3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9" Oct 07 12:57:16 crc kubenswrapper[5024]: E1007 12:57:16.753416 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 12:57:22 crc kubenswrapper[5024]: I1007 12:57:22.028932 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ddw85"] Oct 07 12:57:22 crc kubenswrapper[5024]: E1007 12:57:22.030204 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b7b80b0-5656-47b7-8da5-bd0b25255076" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 07 12:57:22 crc kubenswrapper[5024]: I1007 12:57:22.030224 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b7b80b0-5656-47b7-8da5-bd0b25255076" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 07 12:57:22 crc kubenswrapper[5024]: E1007 12:57:22.030242 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71b11798-29d2-41af-b75d-2eace28114e0" containerName="registry-server" Oct 07 12:57:22 crc kubenswrapper[5024]: I1007 12:57:22.030250 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="71b11798-29d2-41af-b75d-2eace28114e0" containerName="registry-server" Oct 07 12:57:22 crc kubenswrapper[5024]: E1007 12:57:22.030272 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71b11798-29d2-41af-b75d-2eace28114e0" 
containerName="extract-content" Oct 07 12:57:22 crc kubenswrapper[5024]: I1007 12:57:22.030279 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="71b11798-29d2-41af-b75d-2eace28114e0" containerName="extract-content" Oct 07 12:57:22 crc kubenswrapper[5024]: E1007 12:57:22.030295 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71b11798-29d2-41af-b75d-2eace28114e0" containerName="extract-utilities" Oct 07 12:57:22 crc kubenswrapper[5024]: I1007 12:57:22.030302 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="71b11798-29d2-41af-b75d-2eace28114e0" containerName="extract-utilities" Oct 07 12:57:22 crc kubenswrapper[5024]: I1007 12:57:22.030509 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b7b80b0-5656-47b7-8da5-bd0b25255076" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 07 12:57:22 crc kubenswrapper[5024]: I1007 12:57:22.030529 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="71b11798-29d2-41af-b75d-2eace28114e0" containerName="registry-server" Oct 07 12:57:22 crc kubenswrapper[5024]: I1007 12:57:22.031222 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ddw85" Oct 07 12:57:22 crc kubenswrapper[5024]: I1007 12:57:22.033804 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 07 12:57:22 crc kubenswrapper[5024]: I1007 12:57:22.034267 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 07 12:57:22 crc kubenswrapper[5024]: I1007 12:57:22.034340 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-424lb" Oct 07 12:57:22 crc kubenswrapper[5024]: I1007 12:57:22.035665 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 07 12:57:22 crc kubenswrapper[5024]: I1007 12:57:22.053738 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ddw85"] Oct 07 12:57:22 crc kubenswrapper[5024]: I1007 12:57:22.181078 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f3929fb-37ea-4655-a429-d1d4019751f4-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ddw85\" (UID: \"2f3929fb-37ea-4655-a429-d1d4019751f4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ddw85" Oct 07 12:57:22 crc kubenswrapper[5024]: I1007 12:57:22.181186 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85tvw\" (UniqueName: \"kubernetes.io/projected/2f3929fb-37ea-4655-a429-d1d4019751f4-kube-api-access-85tvw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ddw85\" (UID: \"2f3929fb-37ea-4655-a429-d1d4019751f4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ddw85" Oct 07 12:57:22 crc kubenswrapper[5024]: I1007 12:57:22.181354 5024 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f3929fb-37ea-4655-a429-d1d4019751f4-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ddw85\" (UID: \"2f3929fb-37ea-4655-a429-d1d4019751f4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ddw85" Oct 07 12:57:22 crc kubenswrapper[5024]: I1007 12:57:22.282690 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f3929fb-37ea-4655-a429-d1d4019751f4-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ddw85\" (UID: \"2f3929fb-37ea-4655-a429-d1d4019751f4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ddw85" Oct 07 12:57:22 crc kubenswrapper[5024]: I1007 12:57:22.282799 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f3929fb-37ea-4655-a429-d1d4019751f4-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ddw85\" (UID: \"2f3929fb-37ea-4655-a429-d1d4019751f4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ddw85" Oct 07 12:57:22 crc kubenswrapper[5024]: I1007 12:57:22.282856 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85tvw\" (UniqueName: \"kubernetes.io/projected/2f3929fb-37ea-4655-a429-d1d4019751f4-kube-api-access-85tvw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ddw85\" (UID: \"2f3929fb-37ea-4655-a429-d1d4019751f4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ddw85" Oct 07 12:57:22 crc kubenswrapper[5024]: I1007 12:57:22.289922 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f3929fb-37ea-4655-a429-d1d4019751f4-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ddw85\" (UID: 
\"2f3929fb-37ea-4655-a429-d1d4019751f4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ddw85" Oct 07 12:57:22 crc kubenswrapper[5024]: I1007 12:57:22.290051 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f3929fb-37ea-4655-a429-d1d4019751f4-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ddw85\" (UID: \"2f3929fb-37ea-4655-a429-d1d4019751f4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ddw85" Oct 07 12:57:22 crc kubenswrapper[5024]: I1007 12:57:22.303661 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85tvw\" (UniqueName: \"kubernetes.io/projected/2f3929fb-37ea-4655-a429-d1d4019751f4-kube-api-access-85tvw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ddw85\" (UID: \"2f3929fb-37ea-4655-a429-d1d4019751f4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ddw85" Oct 07 12:57:22 crc kubenswrapper[5024]: I1007 12:57:22.363119 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ddw85" Oct 07 12:57:22 crc kubenswrapper[5024]: I1007 12:57:22.920844 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ddw85"] Oct 07 12:57:23 crc kubenswrapper[5024]: I1007 12:57:23.166779 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ddw85" event={"ID":"2f3929fb-37ea-4655-a429-d1d4019751f4","Type":"ContainerStarted","Data":"1f16eacd4eb8e4fc890ea85a62ae67eb2162edaa5ec2f732033c93ca7dd5b226"} Oct 07 12:57:25 crc kubenswrapper[5024]: I1007 12:57:25.189400 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ddw85" event={"ID":"2f3929fb-37ea-4655-a429-d1d4019751f4","Type":"ContainerStarted","Data":"4d7687b728655f2261e9bd988dbc6d46994ce17b2453148006568bd76e0dd7bc"} Oct 07 12:57:25 crc kubenswrapper[5024]: I1007 12:57:25.222269 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ddw85" podStartSLOduration=1.774852018 podStartE2EDuration="3.222243744s" podCreationTimestamp="2025-10-07 12:57:22 +0000 UTC" firstStartedPulling="2025-10-07 12:57:22.91984509 +0000 UTC m=+1780.995631928" lastFinishedPulling="2025-10-07 12:57:24.367236816 +0000 UTC m=+1782.443023654" observedRunningTime="2025-10-07 12:57:25.210181815 +0000 UTC m=+1783.285968673" watchObservedRunningTime="2025-10-07 12:57:25.222243744 +0000 UTC m=+1783.298030592" Oct 07 12:57:29 crc kubenswrapper[5024]: I1007 12:57:29.053714 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-7c29-account-create-bmsrg"] Oct 07 12:57:29 crc kubenswrapper[5024]: I1007 12:57:29.062983 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-9395-account-create-cnnqv"] Oct 07 12:57:29 crc 
kubenswrapper[5024]: I1007 12:57:29.072996 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7a97-account-create-jpmgr"] Oct 07 12:57:29 crc kubenswrapper[5024]: I1007 12:57:29.081313 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-7c29-account-create-bmsrg"] Oct 07 12:57:29 crc kubenswrapper[5024]: I1007 12:57:29.088620 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-9395-account-create-cnnqv"] Oct 07 12:57:29 crc kubenswrapper[5024]: I1007 12:57:29.095697 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7a97-account-create-jpmgr"] Oct 07 12:57:30 crc kubenswrapper[5024]: I1007 12:57:30.773061 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d5e7b20-6b89-434d-b940-530830f73fcb" path="/var/lib/kubelet/pods/5d5e7b20-6b89-434d-b940-530830f73fcb/volumes" Oct 07 12:57:30 crc kubenswrapper[5024]: I1007 12:57:30.774951 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78ceed30-6beb-456f-9ecd-9edf82215f20" path="/var/lib/kubelet/pods/78ceed30-6beb-456f-9ecd-9edf82215f20/volumes" Oct 07 12:57:30 crc kubenswrapper[5024]: I1007 12:57:30.776111 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e12621fa-274f-4792-8110-c51afe64bed0" path="/var/lib/kubelet/pods/e12621fa-274f-4792-8110-c51afe64bed0/volumes" Oct 07 12:57:31 crc kubenswrapper[5024]: I1007 12:57:31.752281 5024 scope.go:117] "RemoveContainer" containerID="3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9" Oct 07 12:57:31 crc kubenswrapper[5024]: E1007 12:57:31.752698 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 12:57:46 crc kubenswrapper[5024]: I1007 12:57:46.324444 5024 scope.go:117] "RemoveContainer" containerID="f4ba56bdde2c651f7b22446722196f54a3d7126f0f22a8b02c9fa9c8c254e56b" Oct 07 12:57:46 crc kubenswrapper[5024]: I1007 12:57:46.349750 5024 scope.go:117] "RemoveContainer" containerID="e595cd218e3b423dca4725103664a5c480a4ec6c08a3d61b0fcc126296575362" Oct 07 12:57:46 crc kubenswrapper[5024]: I1007 12:57:46.417905 5024 scope.go:117] "RemoveContainer" containerID="046c27e6f898b3cb5e122a7733d11884ec734fe9c4c7e4d7f954ed9f18a52f89" Oct 07 12:57:46 crc kubenswrapper[5024]: I1007 12:57:46.451407 5024 scope.go:117] "RemoveContainer" containerID="ba88178fe388c78a0ee60285f177f2731bf95c47c1ae10bfcb3a58536b32d8a6" Oct 07 12:57:46 crc kubenswrapper[5024]: I1007 12:57:46.487073 5024 scope.go:117] "RemoveContainer" containerID="040fdab56371916c47630541a4a5cf8d36cccbbc046ed9b0a9058603265d86b6" Oct 07 12:57:46 crc kubenswrapper[5024]: I1007 12:57:46.528358 5024 scope.go:117] "RemoveContainer" containerID="e929a9535d5df0fccc6684896091c6f844b7c2817d948f988a8fffdbebe55040" Oct 07 12:57:46 crc kubenswrapper[5024]: I1007 12:57:46.581506 5024 scope.go:117] "RemoveContainer" containerID="61da99d9a61e9a7845a12b3e29423ed7db3acb8777e2f305ca3315477f60d606" Oct 07 12:57:46 crc kubenswrapper[5024]: I1007 12:57:46.608806 5024 scope.go:117] "RemoveContainer" containerID="45f0aedce3f4d5704211792e1b7a7481ba4eb5f10caf8f32bb151ca1a115b9a3" Oct 07 12:57:46 crc kubenswrapper[5024]: I1007 12:57:46.751959 5024 scope.go:117] "RemoveContainer" containerID="3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9" Oct 07 12:57:46 crc kubenswrapper[5024]: E1007 12:57:46.752240 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 12:57:52 crc kubenswrapper[5024]: I1007 12:57:52.043805 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fgvjl"] Oct 07 12:57:52 crc kubenswrapper[5024]: I1007 12:57:52.056665 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fgvjl"] Oct 07 12:57:52 crc kubenswrapper[5024]: I1007 12:57:52.785334 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2082746b-d351-4905-a3f6-320ff139b534" path="/var/lib/kubelet/pods/2082746b-d351-4905-a3f6-320ff139b534/volumes" Oct 07 12:57:58 crc kubenswrapper[5024]: I1007 12:57:58.751465 5024 scope.go:117] "RemoveContainer" containerID="3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9" Oct 07 12:57:58 crc kubenswrapper[5024]: E1007 12:57:58.752234 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 12:58:09 crc kubenswrapper[5024]: I1007 12:58:09.620617 5024 generic.go:334] "Generic (PLEG): container finished" podID="2f3929fb-37ea-4655-a429-d1d4019751f4" containerID="4d7687b728655f2261e9bd988dbc6d46994ce17b2453148006568bd76e0dd7bc" exitCode=0 Oct 07 12:58:09 crc kubenswrapper[5024]: I1007 12:58:09.620708 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ddw85" 
event={"ID":"2f3929fb-37ea-4655-a429-d1d4019751f4","Type":"ContainerDied","Data":"4d7687b728655f2261e9bd988dbc6d46994ce17b2453148006568bd76e0dd7bc"} Oct 07 12:58:09 crc kubenswrapper[5024]: I1007 12:58:09.751983 5024 scope.go:117] "RemoveContainer" containerID="3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9" Oct 07 12:58:09 crc kubenswrapper[5024]: E1007 12:58:09.752253 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.076761 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ddw85" Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.165464 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f3929fb-37ea-4655-a429-d1d4019751f4-ssh-key\") pod \"2f3929fb-37ea-4655-a429-d1d4019751f4\" (UID: \"2f3929fb-37ea-4655-a429-d1d4019751f4\") " Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.165518 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85tvw\" (UniqueName: \"kubernetes.io/projected/2f3929fb-37ea-4655-a429-d1d4019751f4-kube-api-access-85tvw\") pod \"2f3929fb-37ea-4655-a429-d1d4019751f4\" (UID: \"2f3929fb-37ea-4655-a429-d1d4019751f4\") " Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.165778 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f3929fb-37ea-4655-a429-d1d4019751f4-inventory\") pod 
\"2f3929fb-37ea-4655-a429-d1d4019751f4\" (UID: \"2f3929fb-37ea-4655-a429-d1d4019751f4\") " Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.172070 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f3929fb-37ea-4655-a429-d1d4019751f4-kube-api-access-85tvw" (OuterVolumeSpecName: "kube-api-access-85tvw") pod "2f3929fb-37ea-4655-a429-d1d4019751f4" (UID: "2f3929fb-37ea-4655-a429-d1d4019751f4"). InnerVolumeSpecName "kube-api-access-85tvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.192382 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f3929fb-37ea-4655-a429-d1d4019751f4-inventory" (OuterVolumeSpecName: "inventory") pod "2f3929fb-37ea-4655-a429-d1d4019751f4" (UID: "2f3929fb-37ea-4655-a429-d1d4019751f4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.192667 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f3929fb-37ea-4655-a429-d1d4019751f4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2f3929fb-37ea-4655-a429-d1d4019751f4" (UID: "2f3929fb-37ea-4655-a429-d1d4019751f4"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.267908 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f3929fb-37ea-4655-a429-d1d4019751f4-inventory\") on node \"crc\" DevicePath \"\"" Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.267958 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f3929fb-37ea-4655-a429-d1d4019751f4-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.267972 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85tvw\" (UniqueName: \"kubernetes.io/projected/2f3929fb-37ea-4655-a429-d1d4019751f4-kube-api-access-85tvw\") on node \"crc\" DevicePath \"\"" Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.649818 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ddw85" event={"ID":"2f3929fb-37ea-4655-a429-d1d4019751f4","Type":"ContainerDied","Data":"1f16eacd4eb8e4fc890ea85a62ae67eb2162edaa5ec2f732033c93ca7dd5b226"} Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.649856 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f16eacd4eb8e4fc890ea85a62ae67eb2162edaa5ec2f732033c93ca7dd5b226" Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.649866 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ddw85" Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.754411 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ptg46"] Oct 07 12:58:11 crc kubenswrapper[5024]: E1007 12:58:11.755330 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f3929fb-37ea-4655-a429-d1d4019751f4" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.755373 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f3929fb-37ea-4655-a429-d1d4019751f4" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.755765 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f3929fb-37ea-4655-a429-d1d4019751f4" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.756655 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ptg46" Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.764657 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.765531 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.765876 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-424lb" Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.766269 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.770355 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ptg46"] Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.884093 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f7afd213-475e-4426-913f-ec7b75850f3a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-ptg46\" (UID: \"f7afd213-475e-4426-913f-ec7b75850f3a\") " pod="openstack/ssh-known-hosts-edpm-deployment-ptg46" Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.884160 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7afd213-475e-4426-913f-ec7b75850f3a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-ptg46\" (UID: \"f7afd213-475e-4426-913f-ec7b75850f3a\") " pod="openstack/ssh-known-hosts-edpm-deployment-ptg46" Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.884239 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-lzxng\" (UniqueName: \"kubernetes.io/projected/f7afd213-475e-4426-913f-ec7b75850f3a-kube-api-access-lzxng\") pod \"ssh-known-hosts-edpm-deployment-ptg46\" (UID: \"f7afd213-475e-4426-913f-ec7b75850f3a\") " pod="openstack/ssh-known-hosts-edpm-deployment-ptg46" Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.985875 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f7afd213-475e-4426-913f-ec7b75850f3a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-ptg46\" (UID: \"f7afd213-475e-4426-913f-ec7b75850f3a\") " pod="openstack/ssh-known-hosts-edpm-deployment-ptg46" Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.985930 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7afd213-475e-4426-913f-ec7b75850f3a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-ptg46\" (UID: \"f7afd213-475e-4426-913f-ec7b75850f3a\") " pod="openstack/ssh-known-hosts-edpm-deployment-ptg46" Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.985981 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzxng\" (UniqueName: \"kubernetes.io/projected/f7afd213-475e-4426-913f-ec7b75850f3a-kube-api-access-lzxng\") pod \"ssh-known-hosts-edpm-deployment-ptg46\" (UID: \"f7afd213-475e-4426-913f-ec7b75850f3a\") " pod="openstack/ssh-known-hosts-edpm-deployment-ptg46" Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 12:58:11.991160 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7afd213-475e-4426-913f-ec7b75850f3a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-ptg46\" (UID: \"f7afd213-475e-4426-913f-ec7b75850f3a\") " pod="openstack/ssh-known-hosts-edpm-deployment-ptg46" Oct 07 12:58:11 crc kubenswrapper[5024]: I1007 
12:58:11.993785 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f7afd213-475e-4426-913f-ec7b75850f3a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-ptg46\" (UID: \"f7afd213-475e-4426-913f-ec7b75850f3a\") " pod="openstack/ssh-known-hosts-edpm-deployment-ptg46" Oct 07 12:58:12 crc kubenswrapper[5024]: I1007 12:58:12.007734 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzxng\" (UniqueName: \"kubernetes.io/projected/f7afd213-475e-4426-913f-ec7b75850f3a-kube-api-access-lzxng\") pod \"ssh-known-hosts-edpm-deployment-ptg46\" (UID: \"f7afd213-475e-4426-913f-ec7b75850f3a\") " pod="openstack/ssh-known-hosts-edpm-deployment-ptg46" Oct 07 12:58:12 crc kubenswrapper[5024]: I1007 12:58:12.085619 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ptg46" Oct 07 12:58:12 crc kubenswrapper[5024]: I1007 12:58:12.644194 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ptg46"] Oct 07 12:58:12 crc kubenswrapper[5024]: W1007 12:58:12.644323 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7afd213_475e_4426_913f_ec7b75850f3a.slice/crio-97b3b025baf606a185dea2d535c7c7439d7ecee772a6480c0276d6006e271616 WatchSource:0}: Error finding container 97b3b025baf606a185dea2d535c7c7439d7ecee772a6480c0276d6006e271616: Status 404 returned error can't find the container with id 97b3b025baf606a185dea2d535c7c7439d7ecee772a6480c0276d6006e271616 Oct 07 12:58:12 crc kubenswrapper[5024]: I1007 12:58:12.664964 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ptg46" event={"ID":"f7afd213-475e-4426-913f-ec7b75850f3a","Type":"ContainerStarted","Data":"97b3b025baf606a185dea2d535c7c7439d7ecee772a6480c0276d6006e271616"} Oct 07 
12:58:13 crc kubenswrapper[5024]: I1007 12:58:13.672293 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ptg46" event={"ID":"f7afd213-475e-4426-913f-ec7b75850f3a","Type":"ContainerStarted","Data":"12c546a42f4258eb8ca8e356ca582d3cc42400dbe23238b1980b4e3bc856175b"} Oct 07 12:58:13 crc kubenswrapper[5024]: I1007 12:58:13.697931 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-ptg46" podStartSLOduration=2.288814013 podStartE2EDuration="2.697906803s" podCreationTimestamp="2025-10-07 12:58:11 +0000 UTC" firstStartedPulling="2025-10-07 12:58:12.648351114 +0000 UTC m=+1830.724137952" lastFinishedPulling="2025-10-07 12:58:13.057443904 +0000 UTC m=+1831.133230742" observedRunningTime="2025-10-07 12:58:13.690047245 +0000 UTC m=+1831.765834083" watchObservedRunningTime="2025-10-07 12:58:13.697906803 +0000 UTC m=+1831.773693641" Oct 07 12:58:14 crc kubenswrapper[5024]: I1007 12:58:14.072648 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-bd6lw"] Oct 07 12:58:14 crc kubenswrapper[5024]: I1007 12:58:14.085552 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-bd6lw"] Oct 07 12:58:14 crc kubenswrapper[5024]: I1007 12:58:14.761081 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17f46fbe-963b-4d59-b9a8-3d02e31157a3" path="/var/lib/kubelet/pods/17f46fbe-963b-4d59-b9a8-3d02e31157a3/volumes" Oct 07 12:58:16 crc kubenswrapper[5024]: I1007 12:58:16.034378 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bm8ht"] Oct 07 12:58:16 crc kubenswrapper[5024]: I1007 12:58:16.041076 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bm8ht"] Oct 07 12:58:16 crc kubenswrapper[5024]: I1007 12:58:16.763180 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="ca249de4-0737-4c42-b2e9-78a0abf2bf94" path="/var/lib/kubelet/pods/ca249de4-0737-4c42-b2e9-78a0abf2bf94/volumes" Oct 07 12:58:20 crc kubenswrapper[5024]: I1007 12:58:20.728673 5024 generic.go:334] "Generic (PLEG): container finished" podID="f7afd213-475e-4426-913f-ec7b75850f3a" containerID="12c546a42f4258eb8ca8e356ca582d3cc42400dbe23238b1980b4e3bc856175b" exitCode=0 Oct 07 12:58:20 crc kubenswrapper[5024]: I1007 12:58:20.728758 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ptg46" event={"ID":"f7afd213-475e-4426-913f-ec7b75850f3a","Type":"ContainerDied","Data":"12c546a42f4258eb8ca8e356ca582d3cc42400dbe23238b1980b4e3bc856175b"} Oct 07 12:58:22 crc kubenswrapper[5024]: I1007 12:58:22.158561 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ptg46" Oct 07 12:58:22 crc kubenswrapper[5024]: I1007 12:58:22.260625 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f7afd213-475e-4426-913f-ec7b75850f3a-inventory-0\") pod \"f7afd213-475e-4426-913f-ec7b75850f3a\" (UID: \"f7afd213-475e-4426-913f-ec7b75850f3a\") " Oct 07 12:58:22 crc kubenswrapper[5024]: I1007 12:58:22.260705 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzxng\" (UniqueName: \"kubernetes.io/projected/f7afd213-475e-4426-913f-ec7b75850f3a-kube-api-access-lzxng\") pod \"f7afd213-475e-4426-913f-ec7b75850f3a\" (UID: \"f7afd213-475e-4426-913f-ec7b75850f3a\") " Oct 07 12:58:22 crc kubenswrapper[5024]: I1007 12:58:22.261019 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7afd213-475e-4426-913f-ec7b75850f3a-ssh-key-openstack-edpm-ipam\") pod \"f7afd213-475e-4426-913f-ec7b75850f3a\" (UID: \"f7afd213-475e-4426-913f-ec7b75850f3a\") " Oct 07 
12:58:22 crc kubenswrapper[5024]: I1007 12:58:22.267530 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7afd213-475e-4426-913f-ec7b75850f3a-kube-api-access-lzxng" (OuterVolumeSpecName: "kube-api-access-lzxng") pod "f7afd213-475e-4426-913f-ec7b75850f3a" (UID: "f7afd213-475e-4426-913f-ec7b75850f3a"). InnerVolumeSpecName "kube-api-access-lzxng". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:58:22 crc kubenswrapper[5024]: I1007 12:58:22.286658 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7afd213-475e-4426-913f-ec7b75850f3a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f7afd213-475e-4426-913f-ec7b75850f3a" (UID: "f7afd213-475e-4426-913f-ec7b75850f3a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:58:22 crc kubenswrapper[5024]: I1007 12:58:22.287001 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7afd213-475e-4426-913f-ec7b75850f3a-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "f7afd213-475e-4426-913f-ec7b75850f3a" (UID: "f7afd213-475e-4426-913f-ec7b75850f3a"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:58:22 crc kubenswrapper[5024]: I1007 12:58:22.365827 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7afd213-475e-4426-913f-ec7b75850f3a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 07 12:58:22 crc kubenswrapper[5024]: I1007 12:58:22.365904 5024 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f7afd213-475e-4426-913f-ec7b75850f3a-inventory-0\") on node \"crc\" DevicePath \"\"" Oct 07 12:58:22 crc kubenswrapper[5024]: I1007 12:58:22.365974 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzxng\" (UniqueName: \"kubernetes.io/projected/f7afd213-475e-4426-913f-ec7b75850f3a-kube-api-access-lzxng\") on node \"crc\" DevicePath \"\"" Oct 07 12:58:22 crc kubenswrapper[5024]: I1007 12:58:22.745701 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ptg46" event={"ID":"f7afd213-475e-4426-913f-ec7b75850f3a","Type":"ContainerDied","Data":"97b3b025baf606a185dea2d535c7c7439d7ecee772a6480c0276d6006e271616"} Oct 07 12:58:22 crc kubenswrapper[5024]: I1007 12:58:22.745744 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97b3b025baf606a185dea2d535c7c7439d7ecee772a6480c0276d6006e271616" Oct 07 12:58:22 crc kubenswrapper[5024]: I1007 12:58:22.745802 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ptg46" Oct 07 12:58:22 crc kubenswrapper[5024]: I1007 12:58:22.759199 5024 scope.go:117] "RemoveContainer" containerID="3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9" Oct 07 12:58:22 crc kubenswrapper[5024]: E1007 12:58:22.759531 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 12:58:22 crc kubenswrapper[5024]: I1007 12:58:22.810549 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-f52k8"] Oct 07 12:58:22 crc kubenswrapper[5024]: E1007 12:58:22.811457 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7afd213-475e-4426-913f-ec7b75850f3a" containerName="ssh-known-hosts-edpm-deployment" Oct 07 12:58:22 crc kubenswrapper[5024]: I1007 12:58:22.811486 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7afd213-475e-4426-913f-ec7b75850f3a" containerName="ssh-known-hosts-edpm-deployment" Oct 07 12:58:22 crc kubenswrapper[5024]: I1007 12:58:22.811693 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7afd213-475e-4426-913f-ec7b75850f3a" containerName="ssh-known-hosts-edpm-deployment" Oct 07 12:58:22 crc kubenswrapper[5024]: I1007 12:58:22.812468 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f52k8" Oct 07 12:58:22 crc kubenswrapper[5024]: I1007 12:58:22.814405 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-424lb" Oct 07 12:58:22 crc kubenswrapper[5024]: I1007 12:58:22.814803 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 07 12:58:22 crc kubenswrapper[5024]: I1007 12:58:22.815000 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 07 12:58:22 crc kubenswrapper[5024]: I1007 12:58:22.816759 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 07 12:58:22 crc kubenswrapper[5024]: I1007 12:58:22.821472 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-f52k8"] Oct 07 12:58:22 crc kubenswrapper[5024]: I1007 12:58:22.976053 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wclv\" (UniqueName: \"kubernetes.io/projected/c90ef245-e526-4462-aed6-43807ec3951f-kube-api-access-9wclv\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f52k8\" (UID: \"c90ef245-e526-4462-aed6-43807ec3951f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f52k8" Oct 07 12:58:22 crc kubenswrapper[5024]: I1007 12:58:22.976450 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c90ef245-e526-4462-aed6-43807ec3951f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f52k8\" (UID: \"c90ef245-e526-4462-aed6-43807ec3951f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f52k8" Oct 07 12:58:22 crc kubenswrapper[5024]: I1007 12:58:22.976483 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c90ef245-e526-4462-aed6-43807ec3951f-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f52k8\" (UID: \"c90ef245-e526-4462-aed6-43807ec3951f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f52k8" Oct 07 12:58:23 crc kubenswrapper[5024]: I1007 12:58:23.078030 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c90ef245-e526-4462-aed6-43807ec3951f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f52k8\" (UID: \"c90ef245-e526-4462-aed6-43807ec3951f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f52k8" Oct 07 12:58:23 crc kubenswrapper[5024]: I1007 12:58:23.078083 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c90ef245-e526-4462-aed6-43807ec3951f-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f52k8\" (UID: \"c90ef245-e526-4462-aed6-43807ec3951f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f52k8" Oct 07 12:58:23 crc kubenswrapper[5024]: I1007 12:58:23.078210 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wclv\" (UniqueName: \"kubernetes.io/projected/c90ef245-e526-4462-aed6-43807ec3951f-kube-api-access-9wclv\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f52k8\" (UID: \"c90ef245-e526-4462-aed6-43807ec3951f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f52k8" Oct 07 12:58:23 crc kubenswrapper[5024]: I1007 12:58:23.083632 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c90ef245-e526-4462-aed6-43807ec3951f-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f52k8\" (UID: \"c90ef245-e526-4462-aed6-43807ec3951f\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f52k8" Oct 07 12:58:23 crc kubenswrapper[5024]: I1007 12:58:23.087011 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c90ef245-e526-4462-aed6-43807ec3951f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f52k8\" (UID: \"c90ef245-e526-4462-aed6-43807ec3951f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f52k8" Oct 07 12:58:23 crc kubenswrapper[5024]: I1007 12:58:23.095860 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wclv\" (UniqueName: \"kubernetes.io/projected/c90ef245-e526-4462-aed6-43807ec3951f-kube-api-access-9wclv\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f52k8\" (UID: \"c90ef245-e526-4462-aed6-43807ec3951f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f52k8" Oct 07 12:58:23 crc kubenswrapper[5024]: I1007 12:58:23.155600 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f52k8" Oct 07 12:58:23 crc kubenswrapper[5024]: I1007 12:58:23.679007 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-f52k8"] Oct 07 12:58:23 crc kubenswrapper[5024]: W1007 12:58:23.684522 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc90ef245_e526_4462_aed6_43807ec3951f.slice/crio-b45add47b5cf9fa4df66ec576fc02227dd37b706f1a63a19dd9bca66b563a262 WatchSource:0}: Error finding container b45add47b5cf9fa4df66ec576fc02227dd37b706f1a63a19dd9bca66b563a262: Status 404 returned error can't find the container with id b45add47b5cf9fa4df66ec576fc02227dd37b706f1a63a19dd9bca66b563a262 Oct 07 12:58:23 crc kubenswrapper[5024]: I1007 12:58:23.764971 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f52k8" event={"ID":"c90ef245-e526-4462-aed6-43807ec3951f","Type":"ContainerStarted","Data":"b45add47b5cf9fa4df66ec576fc02227dd37b706f1a63a19dd9bca66b563a262"} Oct 07 12:58:24 crc kubenswrapper[5024]: I1007 12:58:24.779237 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f52k8" event={"ID":"c90ef245-e526-4462-aed6-43807ec3951f","Type":"ContainerStarted","Data":"a3637981e0ebcde4a3b77a55c99f1e086bfe84e30c0ec6da61b1c6ca71b1fa57"} Oct 07 12:58:24 crc kubenswrapper[5024]: I1007 12:58:24.806687 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f52k8" podStartSLOduration=2.229981017 podStartE2EDuration="2.806665539s" podCreationTimestamp="2025-10-07 12:58:22 +0000 UTC" firstStartedPulling="2025-10-07 12:58:23.686464115 +0000 UTC m=+1841.762250953" lastFinishedPulling="2025-10-07 12:58:24.263148637 +0000 UTC m=+1842.338935475" observedRunningTime="2025-10-07 
12:58:24.804335692 +0000 UTC m=+1842.880122530" watchObservedRunningTime="2025-10-07 12:58:24.806665539 +0000 UTC m=+1842.882452387" Oct 07 12:58:32 crc kubenswrapper[5024]: I1007 12:58:32.879419 5024 generic.go:334] "Generic (PLEG): container finished" podID="c90ef245-e526-4462-aed6-43807ec3951f" containerID="a3637981e0ebcde4a3b77a55c99f1e086bfe84e30c0ec6da61b1c6ca71b1fa57" exitCode=0 Oct 07 12:58:32 crc kubenswrapper[5024]: I1007 12:58:32.879509 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f52k8" event={"ID":"c90ef245-e526-4462-aed6-43807ec3951f","Type":"ContainerDied","Data":"a3637981e0ebcde4a3b77a55c99f1e086bfe84e30c0ec6da61b1c6ca71b1fa57"} Oct 07 12:58:34 crc kubenswrapper[5024]: I1007 12:58:34.297979 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f52k8" Oct 07 12:58:34 crc kubenswrapper[5024]: I1007 12:58:34.494290 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c90ef245-e526-4462-aed6-43807ec3951f-ssh-key\") pod \"c90ef245-e526-4462-aed6-43807ec3951f\" (UID: \"c90ef245-e526-4462-aed6-43807ec3951f\") " Oct 07 12:58:34 crc kubenswrapper[5024]: I1007 12:58:34.494373 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wclv\" (UniqueName: \"kubernetes.io/projected/c90ef245-e526-4462-aed6-43807ec3951f-kube-api-access-9wclv\") pod \"c90ef245-e526-4462-aed6-43807ec3951f\" (UID: \"c90ef245-e526-4462-aed6-43807ec3951f\") " Oct 07 12:58:34 crc kubenswrapper[5024]: I1007 12:58:34.494488 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c90ef245-e526-4462-aed6-43807ec3951f-inventory\") pod \"c90ef245-e526-4462-aed6-43807ec3951f\" (UID: \"c90ef245-e526-4462-aed6-43807ec3951f\") " Oct 07 12:58:34 crc 
kubenswrapper[5024]: I1007 12:58:34.500459 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c90ef245-e526-4462-aed6-43807ec3951f-kube-api-access-9wclv" (OuterVolumeSpecName: "kube-api-access-9wclv") pod "c90ef245-e526-4462-aed6-43807ec3951f" (UID: "c90ef245-e526-4462-aed6-43807ec3951f"). InnerVolumeSpecName "kube-api-access-9wclv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:58:34 crc kubenswrapper[5024]: I1007 12:58:34.526942 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c90ef245-e526-4462-aed6-43807ec3951f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c90ef245-e526-4462-aed6-43807ec3951f" (UID: "c90ef245-e526-4462-aed6-43807ec3951f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:58:34 crc kubenswrapper[5024]: I1007 12:58:34.529018 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c90ef245-e526-4462-aed6-43807ec3951f-inventory" (OuterVolumeSpecName: "inventory") pod "c90ef245-e526-4462-aed6-43807ec3951f" (UID: "c90ef245-e526-4462-aed6-43807ec3951f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:58:34 crc kubenswrapper[5024]: I1007 12:58:34.597061 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c90ef245-e526-4462-aed6-43807ec3951f-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 07 12:58:34 crc kubenswrapper[5024]: I1007 12:58:34.597107 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wclv\" (UniqueName: \"kubernetes.io/projected/c90ef245-e526-4462-aed6-43807ec3951f-kube-api-access-9wclv\") on node \"crc\" DevicePath \"\"" Oct 07 12:58:34 crc kubenswrapper[5024]: I1007 12:58:34.597119 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c90ef245-e526-4462-aed6-43807ec3951f-inventory\") on node \"crc\" DevicePath \"\"" Oct 07 12:58:34 crc kubenswrapper[5024]: I1007 12:58:34.896397 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f52k8" event={"ID":"c90ef245-e526-4462-aed6-43807ec3951f","Type":"ContainerDied","Data":"b45add47b5cf9fa4df66ec576fc02227dd37b706f1a63a19dd9bca66b563a262"} Oct 07 12:58:34 crc kubenswrapper[5024]: I1007 12:58:34.896467 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b45add47b5cf9fa4df66ec576fc02227dd37b706f1a63a19dd9bca66b563a262" Oct 07 12:58:34 crc kubenswrapper[5024]: I1007 12:58:34.896488 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f52k8" Oct 07 12:58:34 crc kubenswrapper[5024]: I1007 12:58:34.987701 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7"] Oct 07 12:58:34 crc kubenswrapper[5024]: E1007 12:58:34.988121 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c90ef245-e526-4462-aed6-43807ec3951f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 07 12:58:34 crc kubenswrapper[5024]: I1007 12:58:34.988161 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c90ef245-e526-4462-aed6-43807ec3951f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 07 12:58:34 crc kubenswrapper[5024]: I1007 12:58:34.988469 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="c90ef245-e526-4462-aed6-43807ec3951f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 07 12:58:34 crc kubenswrapper[5024]: I1007 12:58:34.989207 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7" Oct 07 12:58:34 crc kubenswrapper[5024]: I1007 12:58:34.993872 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 07 12:58:34 crc kubenswrapper[5024]: I1007 12:58:34.993918 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 07 12:58:34 crc kubenswrapper[5024]: I1007 12:58:34.994224 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-424lb" Oct 07 12:58:34 crc kubenswrapper[5024]: I1007 12:58:34.995084 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 07 12:58:34 crc kubenswrapper[5024]: I1007 12:58:34.995195 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7"] Oct 07 12:58:35 crc kubenswrapper[5024]: I1007 12:58:35.003855 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pddr2\" (UniqueName: \"kubernetes.io/projected/e4288bca-eb56-49e6-bf27-f5badea28e48-kube-api-access-pddr2\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7\" (UID: \"e4288bca-eb56-49e6-bf27-f5badea28e48\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7" Oct 07 12:58:35 crc kubenswrapper[5024]: I1007 12:58:35.003957 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4288bca-eb56-49e6-bf27-f5badea28e48-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7\" (UID: \"e4288bca-eb56-49e6-bf27-f5badea28e48\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7" Oct 07 12:58:35 crc kubenswrapper[5024]: I1007 12:58:35.004008 5024 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e4288bca-eb56-49e6-bf27-f5badea28e48-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7\" (UID: \"e4288bca-eb56-49e6-bf27-f5badea28e48\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7" Oct 07 12:58:35 crc kubenswrapper[5024]: I1007 12:58:35.105453 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pddr2\" (UniqueName: \"kubernetes.io/projected/e4288bca-eb56-49e6-bf27-f5badea28e48-kube-api-access-pddr2\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7\" (UID: \"e4288bca-eb56-49e6-bf27-f5badea28e48\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7" Oct 07 12:58:35 crc kubenswrapper[5024]: I1007 12:58:35.105527 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4288bca-eb56-49e6-bf27-f5badea28e48-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7\" (UID: \"e4288bca-eb56-49e6-bf27-f5badea28e48\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7" Oct 07 12:58:35 crc kubenswrapper[5024]: I1007 12:58:35.105559 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e4288bca-eb56-49e6-bf27-f5badea28e48-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7\" (UID: \"e4288bca-eb56-49e6-bf27-f5badea28e48\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7" Oct 07 12:58:35 crc kubenswrapper[5024]: I1007 12:58:35.110496 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4288bca-eb56-49e6-bf27-f5badea28e48-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7\" (UID: 
\"e4288bca-eb56-49e6-bf27-f5badea28e48\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7" Oct 07 12:58:35 crc kubenswrapper[5024]: I1007 12:58:35.110517 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e4288bca-eb56-49e6-bf27-f5badea28e48-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7\" (UID: \"e4288bca-eb56-49e6-bf27-f5badea28e48\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7" Oct 07 12:58:35 crc kubenswrapper[5024]: I1007 12:58:35.127336 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pddr2\" (UniqueName: \"kubernetes.io/projected/e4288bca-eb56-49e6-bf27-f5badea28e48-kube-api-access-pddr2\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7\" (UID: \"e4288bca-eb56-49e6-bf27-f5badea28e48\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7" Oct 07 12:58:35 crc kubenswrapper[5024]: I1007 12:58:35.303767 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7" Oct 07 12:58:35 crc kubenswrapper[5024]: I1007 12:58:35.752010 5024 scope.go:117] "RemoveContainer" containerID="3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9" Oct 07 12:58:35 crc kubenswrapper[5024]: E1007 12:58:35.752445 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 12:58:35 crc kubenswrapper[5024]: I1007 12:58:35.855879 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7"] Oct 07 12:58:35 crc kubenswrapper[5024]: I1007 12:58:35.906488 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7" event={"ID":"e4288bca-eb56-49e6-bf27-f5badea28e48","Type":"ContainerStarted","Data":"c40bdb255fd362d2061951892c74a3b3ff38585c99a653f40e24681a21bf5d59"} Oct 07 12:58:37 crc kubenswrapper[5024]: I1007 12:58:37.921432 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7" event={"ID":"e4288bca-eb56-49e6-bf27-f5badea28e48","Type":"ContainerStarted","Data":"37d557787b580eebddbc6196abeff61a982715b60d5cd8ef6aee196067c7702e"} Oct 07 12:58:37 crc kubenswrapper[5024]: I1007 12:58:37.940374 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7" podStartSLOduration=3.173375398 podStartE2EDuration="3.940355648s" podCreationTimestamp="2025-10-07 12:58:34 +0000 UTC" firstStartedPulling="2025-10-07 
12:58:35.866596783 +0000 UTC m=+1853.942383621" lastFinishedPulling="2025-10-07 12:58:36.633577033 +0000 UTC m=+1854.709363871" observedRunningTime="2025-10-07 12:58:37.935831377 +0000 UTC m=+1856.011618235" watchObservedRunningTime="2025-10-07 12:58:37.940355648 +0000 UTC m=+1856.016142486" Oct 07 12:58:46 crc kubenswrapper[5024]: I1007 12:58:46.802065 5024 scope.go:117] "RemoveContainer" containerID="7e6fcbe2297b522c45d781a8673b5cc18a2f604eb779dc6aa11fc2debb78d89b" Oct 07 12:58:46 crc kubenswrapper[5024]: I1007 12:58:46.841311 5024 scope.go:117] "RemoveContainer" containerID="fa224f5e4caf5ba0204ee07c2121109c57c0d0079b3f8a8498713594104f1025" Oct 07 12:58:46 crc kubenswrapper[5024]: I1007 12:58:46.897095 5024 scope.go:117] "RemoveContainer" containerID="5559e9e74501b9091eee3ee4e0089c64fe74a7fb253cb68b4fa85a7f0dbce80a" Oct 07 12:58:47 crc kubenswrapper[5024]: I1007 12:58:47.009941 5024 generic.go:334] "Generic (PLEG): container finished" podID="e4288bca-eb56-49e6-bf27-f5badea28e48" containerID="37d557787b580eebddbc6196abeff61a982715b60d5cd8ef6aee196067c7702e" exitCode=0 Oct 07 12:58:47 crc kubenswrapper[5024]: I1007 12:58:47.009998 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7" event={"ID":"e4288bca-eb56-49e6-bf27-f5badea28e48","Type":"ContainerDied","Data":"37d557787b580eebddbc6196abeff61a982715b60d5cd8ef6aee196067c7702e"} Oct 07 12:58:48 crc kubenswrapper[5024]: I1007 12:58:48.509530 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7" Oct 07 12:58:48 crc kubenswrapper[5024]: I1007 12:58:48.664288 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e4288bca-eb56-49e6-bf27-f5badea28e48-ssh-key\") pod \"e4288bca-eb56-49e6-bf27-f5badea28e48\" (UID: \"e4288bca-eb56-49e6-bf27-f5badea28e48\") " Oct 07 12:58:48 crc kubenswrapper[5024]: I1007 12:58:48.664634 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddr2\" (UniqueName: \"kubernetes.io/projected/e4288bca-eb56-49e6-bf27-f5badea28e48-kube-api-access-pddr2\") pod \"e4288bca-eb56-49e6-bf27-f5badea28e48\" (UID: \"e4288bca-eb56-49e6-bf27-f5badea28e48\") " Oct 07 12:58:48 crc kubenswrapper[5024]: I1007 12:58:48.664889 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4288bca-eb56-49e6-bf27-f5badea28e48-inventory\") pod \"e4288bca-eb56-49e6-bf27-f5badea28e48\" (UID: \"e4288bca-eb56-49e6-bf27-f5badea28e48\") " Oct 07 12:58:48 crc kubenswrapper[5024]: I1007 12:58:48.673826 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4288bca-eb56-49e6-bf27-f5badea28e48-kube-api-access-pddr2" (OuterVolumeSpecName: "kube-api-access-pddr2") pod "e4288bca-eb56-49e6-bf27-f5badea28e48" (UID: "e4288bca-eb56-49e6-bf27-f5badea28e48"). InnerVolumeSpecName "kube-api-access-pddr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 12:58:48 crc kubenswrapper[5024]: I1007 12:58:48.698204 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4288bca-eb56-49e6-bf27-f5badea28e48-inventory" (OuterVolumeSpecName: "inventory") pod "e4288bca-eb56-49e6-bf27-f5badea28e48" (UID: "e4288bca-eb56-49e6-bf27-f5badea28e48"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:58:48 crc kubenswrapper[5024]: I1007 12:58:48.712041 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4288bca-eb56-49e6-bf27-f5badea28e48-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e4288bca-eb56-49e6-bf27-f5badea28e48" (UID: "e4288bca-eb56-49e6-bf27-f5badea28e48"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 12:58:48 crc kubenswrapper[5024]: I1007 12:58:48.766729 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4288bca-eb56-49e6-bf27-f5badea28e48-inventory\") on node \"crc\" DevicePath \"\"" Oct 07 12:58:48 crc kubenswrapper[5024]: I1007 12:58:48.766761 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e4288bca-eb56-49e6-bf27-f5badea28e48-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 07 12:58:48 crc kubenswrapper[5024]: I1007 12:58:48.766772 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pddr2\" (UniqueName: \"kubernetes.io/projected/e4288bca-eb56-49e6-bf27-f5badea28e48-kube-api-access-pddr2\") on node \"crc\" DevicePath \"\"" Oct 07 12:58:49 crc kubenswrapper[5024]: I1007 12:58:49.033029 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7" event={"ID":"e4288bca-eb56-49e6-bf27-f5badea28e48","Type":"ContainerDied","Data":"c40bdb255fd362d2061951892c74a3b3ff38585c99a653f40e24681a21bf5d59"} Oct 07 12:58:49 crc kubenswrapper[5024]: I1007 12:58:49.033086 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c40bdb255fd362d2061951892c74a3b3ff38585c99a653f40e24681a21bf5d59" Oct 07 12:58:49 crc kubenswrapper[5024]: I1007 12:58:49.033157 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7" Oct 07 12:58:50 crc kubenswrapper[5024]: I1007 12:58:50.752878 5024 scope.go:117] "RemoveContainer" containerID="3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9" Oct 07 12:58:50 crc kubenswrapper[5024]: E1007 12:58:50.753851 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 12:58:58 crc kubenswrapper[5024]: I1007 12:58:58.061102 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-hdf5v"] Oct 07 12:58:58 crc kubenswrapper[5024]: I1007 12:58:58.067601 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-hdf5v"] Oct 07 12:58:58 crc kubenswrapper[5024]: I1007 12:58:58.771979 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4130a45e-dbcf-40d2-bfe9-b353bff57d17" path="/var/lib/kubelet/pods/4130a45e-dbcf-40d2-bfe9-b353bff57d17/volumes" Oct 07 12:59:05 crc kubenswrapper[5024]: I1007 12:59:05.751891 5024 scope.go:117] "RemoveContainer" containerID="3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9" Oct 07 12:59:05 crc kubenswrapper[5024]: E1007 12:59:05.753462 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" 
podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 12:59:20 crc kubenswrapper[5024]: I1007 12:59:20.752246 5024 scope.go:117] "RemoveContainer" containerID="3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9" Oct 07 12:59:20 crc kubenswrapper[5024]: E1007 12:59:20.753020 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 12:59:33 crc kubenswrapper[5024]: I1007 12:59:33.751718 5024 scope.go:117] "RemoveContainer" containerID="3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9" Oct 07 12:59:33 crc kubenswrapper[5024]: E1007 12:59:33.752615 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 12:59:45 crc kubenswrapper[5024]: I1007 12:59:45.754031 5024 scope.go:117] "RemoveContainer" containerID="3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9" Oct 07 12:59:45 crc kubenswrapper[5024]: E1007 12:59:45.754992 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 12:59:47 crc kubenswrapper[5024]: I1007 12:59:47.026122 5024 scope.go:117] "RemoveContainer" containerID="9568d1c4a3bc350b753ed7d626d97eb2758ab018d13dab0fdf9a2667368e3153" Oct 07 12:59:59 crc kubenswrapper[5024]: I1007 12:59:59.752047 5024 scope.go:117] "RemoveContainer" containerID="3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9" Oct 07 12:59:59 crc kubenswrapper[5024]: E1007 12:59:59.753433 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:00:00 crc kubenswrapper[5024]: I1007 13:00:00.174321 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29330700-88sd5"] Oct 07 13:00:00 crc kubenswrapper[5024]: E1007 13:00:00.174861 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4288bca-eb56-49e6-bf27-f5badea28e48" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 07 13:00:00 crc kubenswrapper[5024]: I1007 13:00:00.174875 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4288bca-eb56-49e6-bf27-f5badea28e48" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 07 13:00:00 crc kubenswrapper[5024]: I1007 13:00:00.175074 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4288bca-eb56-49e6-bf27-f5badea28e48" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 07 13:00:00 crc kubenswrapper[5024]: I1007 13:00:00.175795 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29330700-88sd5" Oct 07 13:00:00 crc kubenswrapper[5024]: I1007 13:00:00.180038 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 07 13:00:00 crc kubenswrapper[5024]: I1007 13:00:00.180275 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 07 13:00:00 crc kubenswrapper[5024]: I1007 13:00:00.184783 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29330700-88sd5"] Oct 07 13:00:00 crc kubenswrapper[5024]: I1007 13:00:00.301867 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sm9g\" (UniqueName: \"kubernetes.io/projected/77ae6a17-23ee-4022-9fc7-1fca62434dcb-kube-api-access-7sm9g\") pod \"collect-profiles-29330700-88sd5\" (UID: \"77ae6a17-23ee-4022-9fc7-1fca62434dcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330700-88sd5" Oct 07 13:00:00 crc kubenswrapper[5024]: I1007 13:00:00.301924 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77ae6a17-23ee-4022-9fc7-1fca62434dcb-secret-volume\") pod \"collect-profiles-29330700-88sd5\" (UID: \"77ae6a17-23ee-4022-9fc7-1fca62434dcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330700-88sd5" Oct 07 13:00:00 crc kubenswrapper[5024]: I1007 13:00:00.301969 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77ae6a17-23ee-4022-9fc7-1fca62434dcb-config-volume\") pod \"collect-profiles-29330700-88sd5\" (UID: \"77ae6a17-23ee-4022-9fc7-1fca62434dcb\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29330700-88sd5" Oct 07 13:00:00 crc kubenswrapper[5024]: I1007 13:00:00.405790 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sm9g\" (UniqueName: \"kubernetes.io/projected/77ae6a17-23ee-4022-9fc7-1fca62434dcb-kube-api-access-7sm9g\") pod \"collect-profiles-29330700-88sd5\" (UID: \"77ae6a17-23ee-4022-9fc7-1fca62434dcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330700-88sd5" Oct 07 13:00:00 crc kubenswrapper[5024]: I1007 13:00:00.406404 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77ae6a17-23ee-4022-9fc7-1fca62434dcb-secret-volume\") pod \"collect-profiles-29330700-88sd5\" (UID: \"77ae6a17-23ee-4022-9fc7-1fca62434dcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330700-88sd5" Oct 07 13:00:00 crc kubenswrapper[5024]: I1007 13:00:00.406446 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77ae6a17-23ee-4022-9fc7-1fca62434dcb-config-volume\") pod \"collect-profiles-29330700-88sd5\" (UID: \"77ae6a17-23ee-4022-9fc7-1fca62434dcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330700-88sd5" Oct 07 13:00:00 crc kubenswrapper[5024]: I1007 13:00:00.408126 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77ae6a17-23ee-4022-9fc7-1fca62434dcb-config-volume\") pod \"collect-profiles-29330700-88sd5\" (UID: \"77ae6a17-23ee-4022-9fc7-1fca62434dcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330700-88sd5" Oct 07 13:00:00 crc kubenswrapper[5024]: I1007 13:00:00.415384 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/77ae6a17-23ee-4022-9fc7-1fca62434dcb-secret-volume\") pod \"collect-profiles-29330700-88sd5\" (UID: \"77ae6a17-23ee-4022-9fc7-1fca62434dcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330700-88sd5" Oct 07 13:00:00 crc kubenswrapper[5024]: I1007 13:00:00.429861 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sm9g\" (UniqueName: \"kubernetes.io/projected/77ae6a17-23ee-4022-9fc7-1fca62434dcb-kube-api-access-7sm9g\") pod \"collect-profiles-29330700-88sd5\" (UID: \"77ae6a17-23ee-4022-9fc7-1fca62434dcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330700-88sd5" Oct 07 13:00:00 crc kubenswrapper[5024]: I1007 13:00:00.502031 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29330700-88sd5" Oct 07 13:00:01 crc kubenswrapper[5024]: I1007 13:00:01.001417 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29330700-88sd5"] Oct 07 13:00:01 crc kubenswrapper[5024]: I1007 13:00:01.755890 5024 generic.go:334] "Generic (PLEG): container finished" podID="77ae6a17-23ee-4022-9fc7-1fca62434dcb" containerID="35b48f40aeb7183af53c9b160b3c07c55890aebcf8285e2f04198f1f2435c93a" exitCode=0 Oct 07 13:00:01 crc kubenswrapper[5024]: I1007 13:00:01.756050 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29330700-88sd5" event={"ID":"77ae6a17-23ee-4022-9fc7-1fca62434dcb","Type":"ContainerDied","Data":"35b48f40aeb7183af53c9b160b3c07c55890aebcf8285e2f04198f1f2435c93a"} Oct 07 13:00:01 crc kubenswrapper[5024]: I1007 13:00:01.756317 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29330700-88sd5" 
event={"ID":"77ae6a17-23ee-4022-9fc7-1fca62434dcb","Type":"ContainerStarted","Data":"2111005da4558cc87c59e6d6e8fc2af062d5c153b52c9dff3bff1a92791383e4"} Oct 07 13:00:03 crc kubenswrapper[5024]: I1007 13:00:03.110358 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29330700-88sd5" Oct 07 13:00:03 crc kubenswrapper[5024]: I1007 13:00:03.283504 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sm9g\" (UniqueName: \"kubernetes.io/projected/77ae6a17-23ee-4022-9fc7-1fca62434dcb-kube-api-access-7sm9g\") pod \"77ae6a17-23ee-4022-9fc7-1fca62434dcb\" (UID: \"77ae6a17-23ee-4022-9fc7-1fca62434dcb\") " Oct 07 13:00:03 crc kubenswrapper[5024]: I1007 13:00:03.283665 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77ae6a17-23ee-4022-9fc7-1fca62434dcb-secret-volume\") pod \"77ae6a17-23ee-4022-9fc7-1fca62434dcb\" (UID: \"77ae6a17-23ee-4022-9fc7-1fca62434dcb\") " Oct 07 13:00:03 crc kubenswrapper[5024]: I1007 13:00:03.283833 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77ae6a17-23ee-4022-9fc7-1fca62434dcb-config-volume\") pod \"77ae6a17-23ee-4022-9fc7-1fca62434dcb\" (UID: \"77ae6a17-23ee-4022-9fc7-1fca62434dcb\") " Oct 07 13:00:03 crc kubenswrapper[5024]: I1007 13:00:03.284686 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77ae6a17-23ee-4022-9fc7-1fca62434dcb-config-volume" (OuterVolumeSpecName: "config-volume") pod "77ae6a17-23ee-4022-9fc7-1fca62434dcb" (UID: "77ae6a17-23ee-4022-9fc7-1fca62434dcb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 13:00:03 crc kubenswrapper[5024]: I1007 13:00:03.284832 5024 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77ae6a17-23ee-4022-9fc7-1fca62434dcb-config-volume\") on node \"crc\" DevicePath \"\"" Oct 07 13:00:03 crc kubenswrapper[5024]: I1007 13:00:03.291811 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77ae6a17-23ee-4022-9fc7-1fca62434dcb-kube-api-access-7sm9g" (OuterVolumeSpecName: "kube-api-access-7sm9g") pod "77ae6a17-23ee-4022-9fc7-1fca62434dcb" (UID: "77ae6a17-23ee-4022-9fc7-1fca62434dcb"). InnerVolumeSpecName "kube-api-access-7sm9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:00:03 crc kubenswrapper[5024]: I1007 13:00:03.292809 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77ae6a17-23ee-4022-9fc7-1fca62434dcb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "77ae6a17-23ee-4022-9fc7-1fca62434dcb" (UID: "77ae6a17-23ee-4022-9fc7-1fca62434dcb"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:00:03 crc kubenswrapper[5024]: I1007 13:00:03.388064 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sm9g\" (UniqueName: \"kubernetes.io/projected/77ae6a17-23ee-4022-9fc7-1fca62434dcb-kube-api-access-7sm9g\") on node \"crc\" DevicePath \"\"" Oct 07 13:00:03 crc kubenswrapper[5024]: I1007 13:00:03.388119 5024 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77ae6a17-23ee-4022-9fc7-1fca62434dcb-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 07 13:00:03 crc kubenswrapper[5024]: I1007 13:00:03.783696 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29330700-88sd5" event={"ID":"77ae6a17-23ee-4022-9fc7-1fca62434dcb","Type":"ContainerDied","Data":"2111005da4558cc87c59e6d6e8fc2af062d5c153b52c9dff3bff1a92791383e4"} Oct 07 13:00:03 crc kubenswrapper[5024]: I1007 13:00:03.783748 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2111005da4558cc87c59e6d6e8fc2af062d5c153b52c9dff3bff1a92791383e4" Oct 07 13:00:03 crc kubenswrapper[5024]: I1007 13:00:03.783852 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29330700-88sd5" Oct 07 13:00:13 crc kubenswrapper[5024]: I1007 13:00:13.752153 5024 scope.go:117] "RemoveContainer" containerID="3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9" Oct 07 13:00:14 crc kubenswrapper[5024]: I1007 13:00:14.887365 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerStarted","Data":"fa5bef8997e225ec3cb53bc4a8212ef4374b844b1594898d7b97b9838a5326e2"} Oct 07 13:01:00 crc kubenswrapper[5024]: I1007 13:01:00.160481 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29330701-bsr5z"] Oct 07 13:01:00 crc kubenswrapper[5024]: E1007 13:01:00.161751 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77ae6a17-23ee-4022-9fc7-1fca62434dcb" containerName="collect-profiles" Oct 07 13:01:00 crc kubenswrapper[5024]: I1007 13:01:00.161768 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="77ae6a17-23ee-4022-9fc7-1fca62434dcb" containerName="collect-profiles" Oct 07 13:01:00 crc kubenswrapper[5024]: I1007 13:01:00.161948 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="77ae6a17-23ee-4022-9fc7-1fca62434dcb" containerName="collect-profiles" Oct 07 13:01:00 crc kubenswrapper[5024]: I1007 13:01:00.162779 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29330701-bsr5z" Oct 07 13:01:00 crc kubenswrapper[5024]: I1007 13:01:00.181352 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29330701-bsr5z"] Oct 07 13:01:00 crc kubenswrapper[5024]: I1007 13:01:00.259951 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/872b9e07-c892-450b-bc31-786916818a09-fernet-keys\") pod \"keystone-cron-29330701-bsr5z\" (UID: \"872b9e07-c892-450b-bc31-786916818a09\") " pod="openstack/keystone-cron-29330701-bsr5z" Oct 07 13:01:00 crc kubenswrapper[5024]: I1007 13:01:00.260019 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7slwv\" (UniqueName: \"kubernetes.io/projected/872b9e07-c892-450b-bc31-786916818a09-kube-api-access-7slwv\") pod \"keystone-cron-29330701-bsr5z\" (UID: \"872b9e07-c892-450b-bc31-786916818a09\") " pod="openstack/keystone-cron-29330701-bsr5z" Oct 07 13:01:00 crc kubenswrapper[5024]: I1007 13:01:00.260051 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/872b9e07-c892-450b-bc31-786916818a09-combined-ca-bundle\") pod \"keystone-cron-29330701-bsr5z\" (UID: \"872b9e07-c892-450b-bc31-786916818a09\") " pod="openstack/keystone-cron-29330701-bsr5z" Oct 07 13:01:00 crc kubenswrapper[5024]: I1007 13:01:00.260149 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/872b9e07-c892-450b-bc31-786916818a09-config-data\") pod \"keystone-cron-29330701-bsr5z\" (UID: \"872b9e07-c892-450b-bc31-786916818a09\") " pod="openstack/keystone-cron-29330701-bsr5z" Oct 07 13:01:00 crc kubenswrapper[5024]: I1007 13:01:00.361607 5024 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/872b9e07-c892-450b-bc31-786916818a09-fernet-keys\") pod \"keystone-cron-29330701-bsr5z\" (UID: \"872b9e07-c892-450b-bc31-786916818a09\") " pod="openstack/keystone-cron-29330701-bsr5z" Oct 07 13:01:00 crc kubenswrapper[5024]: I1007 13:01:00.361695 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7slwv\" (UniqueName: \"kubernetes.io/projected/872b9e07-c892-450b-bc31-786916818a09-kube-api-access-7slwv\") pod \"keystone-cron-29330701-bsr5z\" (UID: \"872b9e07-c892-450b-bc31-786916818a09\") " pod="openstack/keystone-cron-29330701-bsr5z" Oct 07 13:01:00 crc kubenswrapper[5024]: I1007 13:01:00.361798 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/872b9e07-c892-450b-bc31-786916818a09-combined-ca-bundle\") pod \"keystone-cron-29330701-bsr5z\" (UID: \"872b9e07-c892-450b-bc31-786916818a09\") " pod="openstack/keystone-cron-29330701-bsr5z" Oct 07 13:01:00 crc kubenswrapper[5024]: I1007 13:01:00.362353 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/872b9e07-c892-450b-bc31-786916818a09-config-data\") pod \"keystone-cron-29330701-bsr5z\" (UID: \"872b9e07-c892-450b-bc31-786916818a09\") " pod="openstack/keystone-cron-29330701-bsr5z" Oct 07 13:01:00 crc kubenswrapper[5024]: I1007 13:01:00.370384 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/872b9e07-c892-450b-bc31-786916818a09-fernet-keys\") pod \"keystone-cron-29330701-bsr5z\" (UID: \"872b9e07-c892-450b-bc31-786916818a09\") " pod="openstack/keystone-cron-29330701-bsr5z" Oct 07 13:01:00 crc kubenswrapper[5024]: I1007 13:01:00.370470 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/872b9e07-c892-450b-bc31-786916818a09-combined-ca-bundle\") pod \"keystone-cron-29330701-bsr5z\" (UID: \"872b9e07-c892-450b-bc31-786916818a09\") " pod="openstack/keystone-cron-29330701-bsr5z" Oct 07 13:01:00 crc kubenswrapper[5024]: I1007 13:01:00.377850 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/872b9e07-c892-450b-bc31-786916818a09-config-data\") pod \"keystone-cron-29330701-bsr5z\" (UID: \"872b9e07-c892-450b-bc31-786916818a09\") " pod="openstack/keystone-cron-29330701-bsr5z" Oct 07 13:01:00 crc kubenswrapper[5024]: I1007 13:01:00.385287 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7slwv\" (UniqueName: \"kubernetes.io/projected/872b9e07-c892-450b-bc31-786916818a09-kube-api-access-7slwv\") pod \"keystone-cron-29330701-bsr5z\" (UID: \"872b9e07-c892-450b-bc31-786916818a09\") " pod="openstack/keystone-cron-29330701-bsr5z" Oct 07 13:01:00 crc kubenswrapper[5024]: I1007 13:01:00.494076 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29330701-bsr5z" Oct 07 13:01:00 crc kubenswrapper[5024]: I1007 13:01:00.938870 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29330701-bsr5z"] Oct 07 13:01:01 crc kubenswrapper[5024]: I1007 13:01:01.333522 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29330701-bsr5z" event={"ID":"872b9e07-c892-450b-bc31-786916818a09","Type":"ContainerStarted","Data":"c3066d80395a8ac585b8ea9f344c652765d8909369d6904717d9fc8177121974"} Oct 07 13:01:01 crc kubenswrapper[5024]: I1007 13:01:01.333906 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29330701-bsr5z" event={"ID":"872b9e07-c892-450b-bc31-786916818a09","Type":"ContainerStarted","Data":"bab3d59618c541dcc948b6c20a56667bbd28dfa8bfa9693d1abd7b1065f732dd"} Oct 07 13:01:01 crc kubenswrapper[5024]: I1007 13:01:01.359774 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29330701-bsr5z" podStartSLOduration=1.35975318 podStartE2EDuration="1.35975318s" podCreationTimestamp="2025-10-07 13:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 13:01:01.353484699 +0000 UTC m=+1999.429271537" watchObservedRunningTime="2025-10-07 13:01:01.35975318 +0000 UTC m=+1999.435540028" Oct 07 13:01:03 crc kubenswrapper[5024]: I1007 13:01:03.352540 5024 generic.go:334] "Generic (PLEG): container finished" podID="872b9e07-c892-450b-bc31-786916818a09" containerID="c3066d80395a8ac585b8ea9f344c652765d8909369d6904717d9fc8177121974" exitCode=0 Oct 07 13:01:03 crc kubenswrapper[5024]: I1007 13:01:03.352579 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29330701-bsr5z" event={"ID":"872b9e07-c892-450b-bc31-786916818a09","Type":"ContainerDied","Data":"c3066d80395a8ac585b8ea9f344c652765d8909369d6904717d9fc8177121974"} 
Oct 07 13:01:04 crc kubenswrapper[5024]: I1007 13:01:04.694795 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29330701-bsr5z"
Oct 07 13:01:04 crc kubenswrapper[5024]: I1007 13:01:04.701074 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/872b9e07-c892-450b-bc31-786916818a09-fernet-keys\") pod \"872b9e07-c892-450b-bc31-786916818a09\" (UID: \"872b9e07-c892-450b-bc31-786916818a09\") "
Oct 07 13:01:04 crc kubenswrapper[5024]: I1007 13:01:04.701188 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/872b9e07-c892-450b-bc31-786916818a09-config-data\") pod \"872b9e07-c892-450b-bc31-786916818a09\" (UID: \"872b9e07-c892-450b-bc31-786916818a09\") "
Oct 07 13:01:04 crc kubenswrapper[5024]: I1007 13:01:04.701263 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/872b9e07-c892-450b-bc31-786916818a09-combined-ca-bundle\") pod \"872b9e07-c892-450b-bc31-786916818a09\" (UID: \"872b9e07-c892-450b-bc31-786916818a09\") "
Oct 07 13:01:04 crc kubenswrapper[5024]: I1007 13:01:04.701456 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7slwv\" (UniqueName: \"kubernetes.io/projected/872b9e07-c892-450b-bc31-786916818a09-kube-api-access-7slwv\") pod \"872b9e07-c892-450b-bc31-786916818a09\" (UID: \"872b9e07-c892-450b-bc31-786916818a09\") "
Oct 07 13:01:04 crc kubenswrapper[5024]: I1007 13:01:04.707665 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/872b9e07-c892-450b-bc31-786916818a09-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "872b9e07-c892-450b-bc31-786916818a09" (UID: "872b9e07-c892-450b-bc31-786916818a09"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 13:01:04 crc kubenswrapper[5024]: I1007 13:01:04.707809 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/872b9e07-c892-450b-bc31-786916818a09-kube-api-access-7slwv" (OuterVolumeSpecName: "kube-api-access-7slwv") pod "872b9e07-c892-450b-bc31-786916818a09" (UID: "872b9e07-c892-450b-bc31-786916818a09"). InnerVolumeSpecName "kube-api-access-7slwv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 13:01:04 crc kubenswrapper[5024]: I1007 13:01:04.739838 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/872b9e07-c892-450b-bc31-786916818a09-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "872b9e07-c892-450b-bc31-786916818a09" (UID: "872b9e07-c892-450b-bc31-786916818a09"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 13:01:04 crc kubenswrapper[5024]: I1007 13:01:04.754319 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/872b9e07-c892-450b-bc31-786916818a09-config-data" (OuterVolumeSpecName: "config-data") pod "872b9e07-c892-450b-bc31-786916818a09" (UID: "872b9e07-c892-450b-bc31-786916818a09"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 13:01:04 crc kubenswrapper[5024]: I1007 13:01:04.804147 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7slwv\" (UniqueName: \"kubernetes.io/projected/872b9e07-c892-450b-bc31-786916818a09-kube-api-access-7slwv\") on node \"crc\" DevicePath \"\""
Oct 07 13:01:04 crc kubenswrapper[5024]: I1007 13:01:04.804577 5024 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/872b9e07-c892-450b-bc31-786916818a09-fernet-keys\") on node \"crc\" DevicePath \"\""
Oct 07 13:01:04 crc kubenswrapper[5024]: I1007 13:01:04.804587 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/872b9e07-c892-450b-bc31-786916818a09-config-data\") on node \"crc\" DevicePath \"\""
Oct 07 13:01:04 crc kubenswrapper[5024]: I1007 13:01:04.804597 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/872b9e07-c892-450b-bc31-786916818a09-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 07 13:01:05 crc kubenswrapper[5024]: I1007 13:01:05.376642 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29330701-bsr5z" event={"ID":"872b9e07-c892-450b-bc31-786916818a09","Type":"ContainerDied","Data":"bab3d59618c541dcc948b6c20a56667bbd28dfa8bfa9693d1abd7b1065f732dd"}
Oct 07 13:01:05 crc kubenswrapper[5024]: I1007 13:01:05.376698 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bab3d59618c541dcc948b6c20a56667bbd28dfa8bfa9693d1abd7b1065f732dd"
Oct 07 13:01:05 crc kubenswrapper[5024]: I1007 13:01:05.376747 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29330701-bsr5z"
Oct 07 13:01:40 crc kubenswrapper[5024]: I1007 13:01:40.187965 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kv2nn"]
Oct 07 13:01:40 crc kubenswrapper[5024]: E1007 13:01:40.189016 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="872b9e07-c892-450b-bc31-786916818a09" containerName="keystone-cron"
Oct 07 13:01:40 crc kubenswrapper[5024]: I1007 13:01:40.189031 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="872b9e07-c892-450b-bc31-786916818a09" containerName="keystone-cron"
Oct 07 13:01:40 crc kubenswrapper[5024]: I1007 13:01:40.189235 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="872b9e07-c892-450b-bc31-786916818a09" containerName="keystone-cron"
Oct 07 13:01:40 crc kubenswrapper[5024]: I1007 13:01:40.192216 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kv2nn"
Oct 07 13:01:40 crc kubenswrapper[5024]: I1007 13:01:40.210752 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kv2nn"]
Oct 07 13:01:40 crc kubenswrapper[5024]: I1007 13:01:40.255217 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fadc5a8-2801-4081-bc37-02e3f25bd9d8-catalog-content\") pod \"redhat-operators-kv2nn\" (UID: \"4fadc5a8-2801-4081-bc37-02e3f25bd9d8\") " pod="openshift-marketplace/redhat-operators-kv2nn"
Oct 07 13:01:40 crc kubenswrapper[5024]: I1007 13:01:40.255571 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fadc5a8-2801-4081-bc37-02e3f25bd9d8-utilities\") pod \"redhat-operators-kv2nn\" (UID: \"4fadc5a8-2801-4081-bc37-02e3f25bd9d8\") " pod="openshift-marketplace/redhat-operators-kv2nn"
Oct 07 13:01:40 crc kubenswrapper[5024]: I1007 13:01:40.255969 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxg56\" (UniqueName: \"kubernetes.io/projected/4fadc5a8-2801-4081-bc37-02e3f25bd9d8-kube-api-access-rxg56\") pod \"redhat-operators-kv2nn\" (UID: \"4fadc5a8-2801-4081-bc37-02e3f25bd9d8\") " pod="openshift-marketplace/redhat-operators-kv2nn"
Oct 07 13:01:40 crc kubenswrapper[5024]: I1007 13:01:40.357908 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxg56\" (UniqueName: \"kubernetes.io/projected/4fadc5a8-2801-4081-bc37-02e3f25bd9d8-kube-api-access-rxg56\") pod \"redhat-operators-kv2nn\" (UID: \"4fadc5a8-2801-4081-bc37-02e3f25bd9d8\") " pod="openshift-marketplace/redhat-operators-kv2nn"
Oct 07 13:01:40 crc kubenswrapper[5024]: I1007 13:01:40.358035 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fadc5a8-2801-4081-bc37-02e3f25bd9d8-catalog-content\") pod \"redhat-operators-kv2nn\" (UID: \"4fadc5a8-2801-4081-bc37-02e3f25bd9d8\") " pod="openshift-marketplace/redhat-operators-kv2nn"
Oct 07 13:01:40 crc kubenswrapper[5024]: I1007 13:01:40.358056 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fadc5a8-2801-4081-bc37-02e3f25bd9d8-utilities\") pod \"redhat-operators-kv2nn\" (UID: \"4fadc5a8-2801-4081-bc37-02e3f25bd9d8\") " pod="openshift-marketplace/redhat-operators-kv2nn"
Oct 07 13:01:40 crc kubenswrapper[5024]: I1007 13:01:40.358606 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fadc5a8-2801-4081-bc37-02e3f25bd9d8-utilities\") pod \"redhat-operators-kv2nn\" (UID: \"4fadc5a8-2801-4081-bc37-02e3f25bd9d8\") " pod="openshift-marketplace/redhat-operators-kv2nn"
Oct 07 13:01:40 crc kubenswrapper[5024]: I1007 13:01:40.358678 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fadc5a8-2801-4081-bc37-02e3f25bd9d8-catalog-content\") pod \"redhat-operators-kv2nn\" (UID: \"4fadc5a8-2801-4081-bc37-02e3f25bd9d8\") " pod="openshift-marketplace/redhat-operators-kv2nn"
Oct 07 13:01:40 crc kubenswrapper[5024]: I1007 13:01:40.377568 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxg56\" (UniqueName: \"kubernetes.io/projected/4fadc5a8-2801-4081-bc37-02e3f25bd9d8-kube-api-access-rxg56\") pod \"redhat-operators-kv2nn\" (UID: \"4fadc5a8-2801-4081-bc37-02e3f25bd9d8\") " pod="openshift-marketplace/redhat-operators-kv2nn"
Oct 07 13:01:40 crc kubenswrapper[5024]: I1007 13:01:40.533317 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kv2nn"
Oct 07 13:01:40 crc kubenswrapper[5024]: I1007 13:01:40.990977 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kv2nn"]
Oct 07 13:01:41 crc kubenswrapper[5024]: I1007 13:01:41.720246 5024 generic.go:334] "Generic (PLEG): container finished" podID="4fadc5a8-2801-4081-bc37-02e3f25bd9d8" containerID="1e67b24bf6c7972032dea8403e3a7b3b114650d6e4f90e68b5dd2bd069917d36" exitCode=0
Oct 07 13:01:41 crc kubenswrapper[5024]: I1007 13:01:41.720337 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kv2nn" event={"ID":"4fadc5a8-2801-4081-bc37-02e3f25bd9d8","Type":"ContainerDied","Data":"1e67b24bf6c7972032dea8403e3a7b3b114650d6e4f90e68b5dd2bd069917d36"}
Oct 07 13:01:41 crc kubenswrapper[5024]: I1007 13:01:41.720736 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kv2nn" event={"ID":"4fadc5a8-2801-4081-bc37-02e3f25bd9d8","Type":"ContainerStarted","Data":"fdba720e33557df6c0e6a6d5ef4550a57e83dbf18f226f8cb76668bb201b2e3f"}
Oct 07 13:01:41 crc kubenswrapper[5024]: I1007 13:01:41.723859 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 07 13:01:43 crc kubenswrapper[5024]: I1007 13:01:43.750859 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kv2nn" event={"ID":"4fadc5a8-2801-4081-bc37-02e3f25bd9d8","Type":"ContainerStarted","Data":"e359f1b55f71f5763da2c57eca6d754c134b00fb6ed2b00aae987f1416d92384"}
Oct 07 13:01:44 crc kubenswrapper[5024]: I1007 13:01:44.764317 5024 generic.go:334] "Generic (PLEG): container finished" podID="4fadc5a8-2801-4081-bc37-02e3f25bd9d8" containerID="e359f1b55f71f5763da2c57eca6d754c134b00fb6ed2b00aae987f1416d92384" exitCode=0
Oct 07 13:01:44 crc kubenswrapper[5024]: I1007 13:01:44.764391 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kv2nn" event={"ID":"4fadc5a8-2801-4081-bc37-02e3f25bd9d8","Type":"ContainerDied","Data":"e359f1b55f71f5763da2c57eca6d754c134b00fb6ed2b00aae987f1416d92384"}
Oct 07 13:01:46 crc kubenswrapper[5024]: I1007 13:01:46.790532 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kv2nn" event={"ID":"4fadc5a8-2801-4081-bc37-02e3f25bd9d8","Type":"ContainerStarted","Data":"4869e3370a86149c6155938cf9fdbbad7400fda13f612cc4377d7d980fb793bc"}
Oct 07 13:01:47 crc kubenswrapper[5024]: I1007 13:01:47.834065 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kv2nn" podStartSLOduration=4.087092601 podStartE2EDuration="7.834035821s" podCreationTimestamp="2025-10-07 13:01:40 +0000 UTC" firstStartedPulling="2025-10-07 13:01:41.723329078 +0000 UTC m=+2039.799115946" lastFinishedPulling="2025-10-07 13:01:45.470272318 +0000 UTC m=+2043.546059166" observedRunningTime="2025-10-07 13:01:47.827515783 +0000 UTC m=+2045.903302621" watchObservedRunningTime="2025-10-07 13:01:47.834035821 +0000 UTC m=+2045.909822659"
Oct 07 13:01:50 crc kubenswrapper[5024]: I1007 13:01:50.534504 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kv2nn"
Oct 07 13:01:50 crc kubenswrapper[5024]: I1007 13:01:50.534970 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kv2nn"
Oct 07 13:01:51 crc kubenswrapper[5024]: I1007 13:01:51.591185 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kv2nn" podUID="4fadc5a8-2801-4081-bc37-02e3f25bd9d8" containerName="registry-server" probeResult="failure" output=<
Oct 07 13:01:51 crc kubenswrapper[5024]: timeout: failed to connect service ":50051" within 1s
Oct 07 13:01:51 crc kubenswrapper[5024]: >
Oct 07 13:02:00 crc kubenswrapper[5024]: I1007 13:02:00.618905 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kv2nn"
Oct 07 13:02:00 crc kubenswrapper[5024]: I1007 13:02:00.686174 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kv2nn"
Oct 07 13:02:00 crc kubenswrapper[5024]: I1007 13:02:00.873809 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kv2nn"]
Oct 07 13:02:01 crc kubenswrapper[5024]: I1007 13:02:01.950323 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kv2nn" podUID="4fadc5a8-2801-4081-bc37-02e3f25bd9d8" containerName="registry-server" containerID="cri-o://4869e3370a86149c6155938cf9fdbbad7400fda13f612cc4377d7d980fb793bc" gracePeriod=2
Oct 07 13:02:02 crc kubenswrapper[5024]: I1007 13:02:02.430982 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kv2nn"
Oct 07 13:02:02 crc kubenswrapper[5024]: I1007 13:02:02.567831 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fadc5a8-2801-4081-bc37-02e3f25bd9d8-utilities\") pod \"4fadc5a8-2801-4081-bc37-02e3f25bd9d8\" (UID: \"4fadc5a8-2801-4081-bc37-02e3f25bd9d8\") "
Oct 07 13:02:02 crc kubenswrapper[5024]: I1007 13:02:02.568095 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fadc5a8-2801-4081-bc37-02e3f25bd9d8-catalog-content\") pod \"4fadc5a8-2801-4081-bc37-02e3f25bd9d8\" (UID: \"4fadc5a8-2801-4081-bc37-02e3f25bd9d8\") "
Oct 07 13:02:02 crc kubenswrapper[5024]: I1007 13:02:02.568439 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxg56\" (UniqueName: \"kubernetes.io/projected/4fadc5a8-2801-4081-bc37-02e3f25bd9d8-kube-api-access-rxg56\") pod \"4fadc5a8-2801-4081-bc37-02e3f25bd9d8\" (UID: \"4fadc5a8-2801-4081-bc37-02e3f25bd9d8\") "
Oct 07 13:02:02 crc kubenswrapper[5024]: I1007 13:02:02.569466 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fadc5a8-2801-4081-bc37-02e3f25bd9d8-utilities" (OuterVolumeSpecName: "utilities") pod "4fadc5a8-2801-4081-bc37-02e3f25bd9d8" (UID: "4fadc5a8-2801-4081-bc37-02e3f25bd9d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 07 13:02:02 crc kubenswrapper[5024]: I1007 13:02:02.576469 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fadc5a8-2801-4081-bc37-02e3f25bd9d8-kube-api-access-rxg56" (OuterVolumeSpecName: "kube-api-access-rxg56") pod "4fadc5a8-2801-4081-bc37-02e3f25bd9d8" (UID: "4fadc5a8-2801-4081-bc37-02e3f25bd9d8"). InnerVolumeSpecName "kube-api-access-rxg56". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 13:02:02 crc kubenswrapper[5024]: I1007 13:02:02.671195 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxg56\" (UniqueName: \"kubernetes.io/projected/4fadc5a8-2801-4081-bc37-02e3f25bd9d8-kube-api-access-rxg56\") on node \"crc\" DevicePath \"\""
Oct 07 13:02:02 crc kubenswrapper[5024]: I1007 13:02:02.671666 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fadc5a8-2801-4081-bc37-02e3f25bd9d8-utilities\") on node \"crc\" DevicePath \"\""
Oct 07 13:02:02 crc kubenswrapper[5024]: I1007 13:02:02.682217 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fadc5a8-2801-4081-bc37-02e3f25bd9d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4fadc5a8-2801-4081-bc37-02e3f25bd9d8" (UID: "4fadc5a8-2801-4081-bc37-02e3f25bd9d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 07 13:02:02 crc kubenswrapper[5024]: I1007 13:02:02.780571 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fadc5a8-2801-4081-bc37-02e3f25bd9d8-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 07 13:02:02 crc kubenswrapper[5024]: I1007 13:02:02.972205 5024 generic.go:334] "Generic (PLEG): container finished" podID="4fadc5a8-2801-4081-bc37-02e3f25bd9d8" containerID="4869e3370a86149c6155938cf9fdbbad7400fda13f612cc4377d7d980fb793bc" exitCode=0
Oct 07 13:02:02 crc kubenswrapper[5024]: I1007 13:02:02.972274 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kv2nn" event={"ID":"4fadc5a8-2801-4081-bc37-02e3f25bd9d8","Type":"ContainerDied","Data":"4869e3370a86149c6155938cf9fdbbad7400fda13f612cc4377d7d980fb793bc"}
Oct 07 13:02:02 crc kubenswrapper[5024]: I1007 13:02:02.972317 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kv2nn" event={"ID":"4fadc5a8-2801-4081-bc37-02e3f25bd9d8","Type":"ContainerDied","Data":"fdba720e33557df6c0e6a6d5ef4550a57e83dbf18f226f8cb76668bb201b2e3f"}
Oct 07 13:02:02 crc kubenswrapper[5024]: I1007 13:02:02.972339 5024 scope.go:117] "RemoveContainer" containerID="4869e3370a86149c6155938cf9fdbbad7400fda13f612cc4377d7d980fb793bc"
Oct 07 13:02:02 crc kubenswrapper[5024]: I1007 13:02:02.972362 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kv2nn"
Oct 07 13:02:03 crc kubenswrapper[5024]: I1007 13:02:03.048955 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kv2nn"]
Oct 07 13:02:03 crc kubenswrapper[5024]: I1007 13:02:03.057249 5024 scope.go:117] "RemoveContainer" containerID="e359f1b55f71f5763da2c57eca6d754c134b00fb6ed2b00aae987f1416d92384"
Oct 07 13:02:03 crc kubenswrapper[5024]: I1007 13:02:03.078869 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kv2nn"]
Oct 07 13:02:03 crc kubenswrapper[5024]: I1007 13:02:03.093513 5024 scope.go:117] "RemoveContainer" containerID="1e67b24bf6c7972032dea8403e3a7b3b114650d6e4f90e68b5dd2bd069917d36"
Oct 07 13:02:03 crc kubenswrapper[5024]: I1007 13:02:03.132847 5024 scope.go:117] "RemoveContainer" containerID="4869e3370a86149c6155938cf9fdbbad7400fda13f612cc4377d7d980fb793bc"
Oct 07 13:02:03 crc kubenswrapper[5024]: E1007 13:02:03.133458 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4869e3370a86149c6155938cf9fdbbad7400fda13f612cc4377d7d980fb793bc\": container with ID starting with 4869e3370a86149c6155938cf9fdbbad7400fda13f612cc4377d7d980fb793bc not found: ID does not exist" containerID="4869e3370a86149c6155938cf9fdbbad7400fda13f612cc4377d7d980fb793bc"
Oct 07 13:02:03 crc kubenswrapper[5024]: I1007 13:02:03.133541 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4869e3370a86149c6155938cf9fdbbad7400fda13f612cc4377d7d980fb793bc"} err="failed to get container status \"4869e3370a86149c6155938cf9fdbbad7400fda13f612cc4377d7d980fb793bc\": rpc error: code = NotFound desc = could not find container \"4869e3370a86149c6155938cf9fdbbad7400fda13f612cc4377d7d980fb793bc\": container with ID starting with 4869e3370a86149c6155938cf9fdbbad7400fda13f612cc4377d7d980fb793bc not found: ID does not exist"
Oct 07 13:02:03 crc kubenswrapper[5024]: I1007 13:02:03.133580 5024 scope.go:117] "RemoveContainer" containerID="e359f1b55f71f5763da2c57eca6d754c134b00fb6ed2b00aae987f1416d92384"
Oct 07 13:02:03 crc kubenswrapper[5024]: E1007 13:02:03.134079 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e359f1b55f71f5763da2c57eca6d754c134b00fb6ed2b00aae987f1416d92384\": container with ID starting with e359f1b55f71f5763da2c57eca6d754c134b00fb6ed2b00aae987f1416d92384 not found: ID does not exist" containerID="e359f1b55f71f5763da2c57eca6d754c134b00fb6ed2b00aae987f1416d92384"
Oct 07 13:02:03 crc kubenswrapper[5024]: I1007 13:02:03.134129 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e359f1b55f71f5763da2c57eca6d754c134b00fb6ed2b00aae987f1416d92384"} err="failed to get container status \"e359f1b55f71f5763da2c57eca6d754c134b00fb6ed2b00aae987f1416d92384\": rpc error: code = NotFound desc = could not find container \"e359f1b55f71f5763da2c57eca6d754c134b00fb6ed2b00aae987f1416d92384\": container with ID starting with e359f1b55f71f5763da2c57eca6d754c134b00fb6ed2b00aae987f1416d92384 not found: ID does not exist"
Oct 07 13:02:03 crc kubenswrapper[5024]: I1007 13:02:03.134180 5024 scope.go:117] "RemoveContainer" containerID="1e67b24bf6c7972032dea8403e3a7b3b114650d6e4f90e68b5dd2bd069917d36"
Oct 07 13:02:03 crc kubenswrapper[5024]: E1007 13:02:03.135473 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e67b24bf6c7972032dea8403e3a7b3b114650d6e4f90e68b5dd2bd069917d36\": container with ID starting with 1e67b24bf6c7972032dea8403e3a7b3b114650d6e4f90e68b5dd2bd069917d36 not found: ID does not exist" containerID="1e67b24bf6c7972032dea8403e3a7b3b114650d6e4f90e68b5dd2bd069917d36"
Oct 07 13:02:03 crc kubenswrapper[5024]: I1007 13:02:03.135515 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e67b24bf6c7972032dea8403e3a7b3b114650d6e4f90e68b5dd2bd069917d36"} err="failed to get container status \"1e67b24bf6c7972032dea8403e3a7b3b114650d6e4f90e68b5dd2bd069917d36\": rpc error: code = NotFound desc = could not find container \"1e67b24bf6c7972032dea8403e3a7b3b114650d6e4f90e68b5dd2bd069917d36\": container with ID starting with 1e67b24bf6c7972032dea8403e3a7b3b114650d6e4f90e68b5dd2bd069917d36 not found: ID does not exist"
Oct 07 13:02:04 crc kubenswrapper[5024]: I1007 13:02:04.763272 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fadc5a8-2801-4081-bc37-02e3f25bd9d8" path="/var/lib/kubelet/pods/4fadc5a8-2801-4081-bc37-02e3f25bd9d8/volumes"
Oct 07 13:02:13 crc kubenswrapper[5024]: I1007 13:02:13.720482 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 07 13:02:13 crc kubenswrapper[5024]: I1007 13:02:13.721296 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 07 13:02:43 crc kubenswrapper[5024]: I1007 13:02:43.720472 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 07 13:02:43 crc kubenswrapper[5024]: I1007 13:02:43.721554 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 07 13:03:10 crc kubenswrapper[5024]: I1007 13:03:10.531119 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8mr7r"]
Oct 07 13:03:10 crc kubenswrapper[5024]: E1007 13:03:10.535185 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fadc5a8-2801-4081-bc37-02e3f25bd9d8" containerName="registry-server"
Oct 07 13:03:10 crc kubenswrapper[5024]: I1007 13:03:10.535346 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fadc5a8-2801-4081-bc37-02e3f25bd9d8" containerName="registry-server"
Oct 07 13:03:10 crc kubenswrapper[5024]: E1007 13:03:10.535462 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fadc5a8-2801-4081-bc37-02e3f25bd9d8" containerName="extract-content"
Oct 07 13:03:10 crc kubenswrapper[5024]: I1007 13:03:10.535537 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fadc5a8-2801-4081-bc37-02e3f25bd9d8" containerName="extract-content"
Oct 07 13:03:10 crc kubenswrapper[5024]: E1007 13:03:10.535630 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fadc5a8-2801-4081-bc37-02e3f25bd9d8" containerName="extract-utilities"
Oct 07 13:03:10 crc kubenswrapper[5024]: I1007 13:03:10.535704 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fadc5a8-2801-4081-bc37-02e3f25bd9d8" containerName="extract-utilities"
Oct 07 13:03:10 crc kubenswrapper[5024]: I1007 13:03:10.536067 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fadc5a8-2801-4081-bc37-02e3f25bd9d8" containerName="registry-server"
Oct 07 13:03:10 crc kubenswrapper[5024]: I1007 13:03:10.538374 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8mr7r"
Oct 07 13:03:10 crc kubenswrapper[5024]: I1007 13:03:10.554730 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8mr7r"]
Oct 07 13:03:10 crc kubenswrapper[5024]: I1007 13:03:10.559470 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7d01a25-5580-44e2-9f61-4296a5a0556e-utilities\") pod \"redhat-marketplace-8mr7r\" (UID: \"a7d01a25-5580-44e2-9f61-4296a5a0556e\") " pod="openshift-marketplace/redhat-marketplace-8mr7r"
Oct 07 13:03:10 crc kubenswrapper[5024]: I1007 13:03:10.559638 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs4kt\" (UniqueName: \"kubernetes.io/projected/a7d01a25-5580-44e2-9f61-4296a5a0556e-kube-api-access-bs4kt\") pod \"redhat-marketplace-8mr7r\" (UID: \"a7d01a25-5580-44e2-9f61-4296a5a0556e\") " pod="openshift-marketplace/redhat-marketplace-8mr7r"
Oct 07 13:03:10 crc kubenswrapper[5024]: I1007 13:03:10.560034 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7d01a25-5580-44e2-9f61-4296a5a0556e-catalog-content\") pod \"redhat-marketplace-8mr7r\" (UID: \"a7d01a25-5580-44e2-9f61-4296a5a0556e\") " pod="openshift-marketplace/redhat-marketplace-8mr7r"
Oct 07 13:03:10 crc kubenswrapper[5024]: I1007 13:03:10.661152 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7d01a25-5580-44e2-9f61-4296a5a0556e-catalog-content\") pod \"redhat-marketplace-8mr7r\" (UID: \"a7d01a25-5580-44e2-9f61-4296a5a0556e\") " pod="openshift-marketplace/redhat-marketplace-8mr7r"
Oct 07 13:03:10 crc kubenswrapper[5024]: I1007 13:03:10.661491 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7d01a25-5580-44e2-9f61-4296a5a0556e-utilities\") pod \"redhat-marketplace-8mr7r\" (UID: \"a7d01a25-5580-44e2-9f61-4296a5a0556e\") " pod="openshift-marketplace/redhat-marketplace-8mr7r"
Oct 07 13:03:10 crc kubenswrapper[5024]: I1007 13:03:10.661608 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs4kt\" (UniqueName: \"kubernetes.io/projected/a7d01a25-5580-44e2-9f61-4296a5a0556e-kube-api-access-bs4kt\") pod \"redhat-marketplace-8mr7r\" (UID: \"a7d01a25-5580-44e2-9f61-4296a5a0556e\") " pod="openshift-marketplace/redhat-marketplace-8mr7r"
Oct 07 13:03:10 crc kubenswrapper[5024]: I1007 13:03:10.661686 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7d01a25-5580-44e2-9f61-4296a5a0556e-catalog-content\") pod \"redhat-marketplace-8mr7r\" (UID: \"a7d01a25-5580-44e2-9f61-4296a5a0556e\") " pod="openshift-marketplace/redhat-marketplace-8mr7r"
Oct 07 13:03:10 crc kubenswrapper[5024]: I1007 13:03:10.662186 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7d01a25-5580-44e2-9f61-4296a5a0556e-utilities\") pod \"redhat-marketplace-8mr7r\" (UID: \"a7d01a25-5580-44e2-9f61-4296a5a0556e\") " pod="openshift-marketplace/redhat-marketplace-8mr7r"
Oct 07 13:03:10 crc kubenswrapper[5024]: I1007 13:03:10.682543 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs4kt\" (UniqueName: \"kubernetes.io/projected/a7d01a25-5580-44e2-9f61-4296a5a0556e-kube-api-access-bs4kt\") pod \"redhat-marketplace-8mr7r\" (UID: \"a7d01a25-5580-44e2-9f61-4296a5a0556e\") " pod="openshift-marketplace/redhat-marketplace-8mr7r"
Oct 07 13:03:10 crc kubenswrapper[5024]: I1007 13:03:10.874904 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8mr7r"
Oct 07 13:03:11 crc kubenswrapper[5024]: I1007 13:03:11.433323 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8mr7r"]
Oct 07 13:03:11 crc kubenswrapper[5024]: I1007 13:03:11.661051 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8mr7r" event={"ID":"a7d01a25-5580-44e2-9f61-4296a5a0556e","Type":"ContainerStarted","Data":"aa399f3254cf61c27c63518c72067575520afad8d64b265fcf0ecc1d5f3c8e42"}
Oct 07 13:03:11 crc kubenswrapper[5024]: I1007 13:03:11.661100 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8mr7r" event={"ID":"a7d01a25-5580-44e2-9f61-4296a5a0556e","Type":"ContainerStarted","Data":"11316ab87b39361e10689c730c44b08ee0e799aead6c4f042f98eb83446ff26e"}
Oct 07 13:03:12 crc kubenswrapper[5024]: I1007 13:03:12.670164 5024 generic.go:334] "Generic (PLEG): container finished" podID="a7d01a25-5580-44e2-9f61-4296a5a0556e" containerID="aa399f3254cf61c27c63518c72067575520afad8d64b265fcf0ecc1d5f3c8e42" exitCode=0
Oct 07 13:03:12 crc kubenswrapper[5024]: I1007 13:03:12.670240 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8mr7r" event={"ID":"a7d01a25-5580-44e2-9f61-4296a5a0556e","Type":"ContainerDied","Data":"aa399f3254cf61c27c63518c72067575520afad8d64b265fcf0ecc1d5f3c8e42"}
Oct 07 13:03:13 crc kubenswrapper[5024]: I1007 13:03:13.720246 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 07 13:03:13 crc kubenswrapper[5024]: I1007 13:03:13.720618 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 07 13:03:13 crc kubenswrapper[5024]: I1007 13:03:13.720660 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t95cr"
Oct 07 13:03:13 crc kubenswrapper[5024]: I1007 13:03:13.721494 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fa5bef8997e225ec3cb53bc4a8212ef4374b844b1594898d7b97b9838a5326e2"} pod="openshift-machine-config-operator/machine-config-daemon-t95cr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 07 13:03:13 crc kubenswrapper[5024]: I1007 13:03:13.721557 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" containerID="cri-o://fa5bef8997e225ec3cb53bc4a8212ef4374b844b1594898d7b97b9838a5326e2" gracePeriod=600
Oct 07 13:03:13 crc kubenswrapper[5024]: I1007 13:03:13.908047 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hbw9f"]
Oct 07 13:03:13 crc kubenswrapper[5024]: I1007 13:03:13.913084 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hbw9f"
Oct 07 13:03:13 crc kubenswrapper[5024]: I1007 13:03:13.923293 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hbw9f"]
Oct 07 13:03:14 crc kubenswrapper[5024]: I1007 13:03:14.020579 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d36073b8-f184-4ed9-aa18-f46d8459b4b7-utilities\") pod \"certified-operators-hbw9f\" (UID: \"d36073b8-f184-4ed9-aa18-f46d8459b4b7\") " pod="openshift-marketplace/certified-operators-hbw9f"
Oct 07 13:03:14 crc kubenswrapper[5024]: I1007 13:03:14.020784 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d36073b8-f184-4ed9-aa18-f46d8459b4b7-catalog-content\") pod \"certified-operators-hbw9f\" (UID: \"d36073b8-f184-4ed9-aa18-f46d8459b4b7\") " pod="openshift-marketplace/certified-operators-hbw9f"
Oct 07 13:03:14 crc kubenswrapper[5024]: I1007 13:03:14.020835 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vd6l\" (UniqueName: \"kubernetes.io/projected/d36073b8-f184-4ed9-aa18-f46d8459b4b7-kube-api-access-2vd6l\") pod \"certified-operators-hbw9f\" (UID: \"d36073b8-f184-4ed9-aa18-f46d8459b4b7\") " pod="openshift-marketplace/certified-operators-hbw9f"
Oct 07 13:03:14 crc kubenswrapper[5024]: I1007 13:03:14.122572 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d36073b8-f184-4ed9-aa18-f46d8459b4b7-utilities\") pod \"certified-operators-hbw9f\" (UID: \"d36073b8-f184-4ed9-aa18-f46d8459b4b7\") " pod="openshift-marketplace/certified-operators-hbw9f"
Oct 07 13:03:14 crc kubenswrapper[5024]: I1007 13:03:14.122713 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d36073b8-f184-4ed9-aa18-f46d8459b4b7-catalog-content\") pod \"certified-operators-hbw9f\" (UID: \"d36073b8-f184-4ed9-aa18-f46d8459b4b7\") " pod="openshift-marketplace/certified-operators-hbw9f"
Oct 07 13:03:14 crc kubenswrapper[5024]: I1007 13:03:14.122736 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vd6l\" (UniqueName: \"kubernetes.io/projected/d36073b8-f184-4ed9-aa18-f46d8459b4b7-kube-api-access-2vd6l\") pod \"certified-operators-hbw9f\" (UID: \"d36073b8-f184-4ed9-aa18-f46d8459b4b7\") " pod="openshift-marketplace/certified-operators-hbw9f"
Oct 07 13:03:14 crc kubenswrapper[5024]: I1007 13:03:14.123337 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d36073b8-f184-4ed9-aa18-f46d8459b4b7-utilities\") pod \"certified-operators-hbw9f\" (UID: \"d36073b8-f184-4ed9-aa18-f46d8459b4b7\") " pod="openshift-marketplace/certified-operators-hbw9f"
Oct 07 13:03:14 crc kubenswrapper[5024]: I1007 13:03:14.123414 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d36073b8-f184-4ed9-aa18-f46d8459b4b7-catalog-content\") pod \"certified-operators-hbw9f\" (UID: \"d36073b8-f184-4ed9-aa18-f46d8459b4b7\") " pod="openshift-marketplace/certified-operators-hbw9f"
Oct 07 13:03:14 crc kubenswrapper[5024]: I1007 13:03:14.142482 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vd6l\" (UniqueName: \"kubernetes.io/projected/d36073b8-f184-4ed9-aa18-f46d8459b4b7-kube-api-access-2vd6l\") pod \"certified-operators-hbw9f\" (UID: \"d36073b8-f184-4ed9-aa18-f46d8459b4b7\") " pod="openshift-marketplace/certified-operators-hbw9f"
Oct 07 13:03:14 crc kubenswrapper[5024]: I1007 13:03:14.316787 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hbw9f"
Oct 07 13:03:14 crc kubenswrapper[5024]: I1007 13:03:14.690114 5024 generic.go:334] "Generic (PLEG): container finished" podID="a7d01a25-5580-44e2-9f61-4296a5a0556e" containerID="41d24a07602b5b59a1fe9b29972314e1d0d3dcdfb53318cf057bddf4bf392b1a" exitCode=0
Oct 07 13:03:14 crc kubenswrapper[5024]: I1007 13:03:14.690584 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8mr7r" event={"ID":"a7d01a25-5580-44e2-9f61-4296a5a0556e","Type":"ContainerDied","Data":"41d24a07602b5b59a1fe9b29972314e1d0d3dcdfb53318cf057bddf4bf392b1a"}
Oct 07 13:03:14 crc kubenswrapper[5024]: I1007 13:03:14.706551 5024 generic.go:334] "Generic (PLEG): container finished" podID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerID="fa5bef8997e225ec3cb53bc4a8212ef4374b844b1594898d7b97b9838a5326e2" exitCode=0
Oct 07 13:03:14 crc kubenswrapper[5024]: I1007 13:03:14.709992 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerDied","Data":"fa5bef8997e225ec3cb53bc4a8212ef4374b844b1594898d7b97b9838a5326e2"}
Oct 07 13:03:14 crc kubenswrapper[5024]: I1007 13:03:14.711597 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerStarted","Data":"a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b"}
Oct 07 13:03:14 crc kubenswrapper[5024]: I1007 13:03:14.711625 5024 scope.go:117] "RemoveContainer" containerID="3df5f5e1be927979d08fdbb2e1c669dc3d0e55416c778cb73284b368f909d9f9"
Oct 07 13:03:14 crc kubenswrapper[5024]: I1007 13:03:14.849584 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hbw9f"]
Oct 07 13:03:14 crc kubenswrapper[5024]: W1007 13:03:14.857039
5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd36073b8_f184_4ed9_aa18_f46d8459b4b7.slice/crio-be7a2fc071699dff130f840bffa7a4ef835779f2484cd0af179eceef205b74cc WatchSource:0}: Error finding container be7a2fc071699dff130f840bffa7a4ef835779f2484cd0af179eceef205b74cc: Status 404 returned error can't find the container with id be7a2fc071699dff130f840bffa7a4ef835779f2484cd0af179eceef205b74cc Oct 07 13:03:15 crc kubenswrapper[5024]: I1007 13:03:15.723679 5024 generic.go:334] "Generic (PLEG): container finished" podID="d36073b8-f184-4ed9-aa18-f46d8459b4b7" containerID="488d85c25ad84ae845ab6a70cd479c3423c08cbb4363be1ea5c1c4769287ca1f" exitCode=0 Oct 07 13:03:15 crc kubenswrapper[5024]: I1007 13:03:15.723826 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbw9f" event={"ID":"d36073b8-f184-4ed9-aa18-f46d8459b4b7","Type":"ContainerDied","Data":"488d85c25ad84ae845ab6a70cd479c3423c08cbb4363be1ea5c1c4769287ca1f"} Oct 07 13:03:15 crc kubenswrapper[5024]: I1007 13:03:15.724753 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbw9f" event={"ID":"d36073b8-f184-4ed9-aa18-f46d8459b4b7","Type":"ContainerStarted","Data":"be7a2fc071699dff130f840bffa7a4ef835779f2484cd0af179eceef205b74cc"} Oct 07 13:03:15 crc kubenswrapper[5024]: I1007 13:03:15.732855 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8mr7r" event={"ID":"a7d01a25-5580-44e2-9f61-4296a5a0556e","Type":"ContainerStarted","Data":"86d868cce87883a1da30be202097dd7fe248bbbaaf74c116c5c60778eeaf194b"} Oct 07 13:03:15 crc kubenswrapper[5024]: I1007 13:03:15.789281 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8mr7r" podStartSLOduration=3.302848034 podStartE2EDuration="5.789244041s" podCreationTimestamp="2025-10-07 13:03:10 
+0000 UTC" firstStartedPulling="2025-10-07 13:03:12.672204868 +0000 UTC m=+2130.747991706" lastFinishedPulling="2025-10-07 13:03:15.158600875 +0000 UTC m=+2133.234387713" observedRunningTime="2025-10-07 13:03:15.774757862 +0000 UTC m=+2133.850544740" watchObservedRunningTime="2025-10-07 13:03:15.789244041 +0000 UTC m=+2133.865030909" Oct 07 13:03:16 crc kubenswrapper[5024]: I1007 13:03:16.747876 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbw9f" event={"ID":"d36073b8-f184-4ed9-aa18-f46d8459b4b7","Type":"ContainerStarted","Data":"81687fb597797402c82363d5a394e535c1ad1d8bba7a11cc41a3c1df363f48fe"} Oct 07 13:03:17 crc kubenswrapper[5024]: I1007 13:03:17.764155 5024 generic.go:334] "Generic (PLEG): container finished" podID="d36073b8-f184-4ed9-aa18-f46d8459b4b7" containerID="81687fb597797402c82363d5a394e535c1ad1d8bba7a11cc41a3c1df363f48fe" exitCode=0 Oct 07 13:03:17 crc kubenswrapper[5024]: I1007 13:03:17.764227 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbw9f" event={"ID":"d36073b8-f184-4ed9-aa18-f46d8459b4b7","Type":"ContainerDied","Data":"81687fb597797402c82363d5a394e535c1ad1d8bba7a11cc41a3c1df363f48fe"} Oct 07 13:03:19 crc kubenswrapper[5024]: I1007 13:03:19.804451 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbw9f" event={"ID":"d36073b8-f184-4ed9-aa18-f46d8459b4b7","Type":"ContainerStarted","Data":"8a9f5f84610fb32686c6e4559926efd3738361b29f46062c6dcdb17cdd6d849c"} Oct 07 13:03:19 crc kubenswrapper[5024]: I1007 13:03:19.841900 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hbw9f" podStartSLOduration=3.9920157720000002 podStartE2EDuration="6.841865834s" podCreationTimestamp="2025-10-07 13:03:13 +0000 UTC" firstStartedPulling="2025-10-07 13:03:15.728202347 +0000 UTC m=+2133.803989185" lastFinishedPulling="2025-10-07 
13:03:18.578052399 +0000 UTC m=+2136.653839247" observedRunningTime="2025-10-07 13:03:19.831665109 +0000 UTC m=+2137.907451967" watchObservedRunningTime="2025-10-07 13:03:19.841865834 +0000 UTC m=+2137.917652682" Oct 07 13:03:20 crc kubenswrapper[5024]: I1007 13:03:20.875749 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8mr7r" Oct 07 13:03:20 crc kubenswrapper[5024]: I1007 13:03:20.875807 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8mr7r" Oct 07 13:03:20 crc kubenswrapper[5024]: I1007 13:03:20.924842 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8mr7r" Oct 07 13:03:21 crc kubenswrapper[5024]: I1007 13:03:21.882585 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8mr7r" Oct 07 13:03:23 crc kubenswrapper[5024]: I1007 13:03:23.097539 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8mr7r"] Oct 07 13:03:23 crc kubenswrapper[5024]: I1007 13:03:23.849516 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8mr7r" podUID="a7d01a25-5580-44e2-9f61-4296a5a0556e" containerName="registry-server" containerID="cri-o://86d868cce87883a1da30be202097dd7fe248bbbaaf74c116c5c60778eeaf194b" gracePeriod=2 Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.301530 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8mr7r" Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.317324 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hbw9f" Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.317435 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hbw9f" Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.372991 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hbw9f" Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.385781 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs4kt\" (UniqueName: \"kubernetes.io/projected/a7d01a25-5580-44e2-9f61-4296a5a0556e-kube-api-access-bs4kt\") pod \"a7d01a25-5580-44e2-9f61-4296a5a0556e\" (UID: \"a7d01a25-5580-44e2-9f61-4296a5a0556e\") " Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.385833 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7d01a25-5580-44e2-9f61-4296a5a0556e-utilities\") pod \"a7d01a25-5580-44e2-9f61-4296a5a0556e\" (UID: \"a7d01a25-5580-44e2-9f61-4296a5a0556e\") " Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.386008 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7d01a25-5580-44e2-9f61-4296a5a0556e-catalog-content\") pod \"a7d01a25-5580-44e2-9f61-4296a5a0556e\" (UID: \"a7d01a25-5580-44e2-9f61-4296a5a0556e\") " Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.386955 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7d01a25-5580-44e2-9f61-4296a5a0556e-utilities" (OuterVolumeSpecName: "utilities") pod 
"a7d01a25-5580-44e2-9f61-4296a5a0556e" (UID: "a7d01a25-5580-44e2-9f61-4296a5a0556e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.398829 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7d01a25-5580-44e2-9f61-4296a5a0556e-kube-api-access-bs4kt" (OuterVolumeSpecName: "kube-api-access-bs4kt") pod "a7d01a25-5580-44e2-9f61-4296a5a0556e" (UID: "a7d01a25-5580-44e2-9f61-4296a5a0556e"). InnerVolumeSpecName "kube-api-access-bs4kt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.405564 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7d01a25-5580-44e2-9f61-4296a5a0556e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7d01a25-5580-44e2-9f61-4296a5a0556e" (UID: "a7d01a25-5580-44e2-9f61-4296a5a0556e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.488353 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7d01a25-5580-44e2-9f61-4296a5a0556e-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.488384 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs4kt\" (UniqueName: \"kubernetes.io/projected/a7d01a25-5580-44e2-9f61-4296a5a0556e-kube-api-access-bs4kt\") on node \"crc\" DevicePath \"\"" Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.488396 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7d01a25-5580-44e2-9f61-4296a5a0556e-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.860932 5024 generic.go:334] "Generic (PLEG): container finished" podID="a7d01a25-5580-44e2-9f61-4296a5a0556e" containerID="86d868cce87883a1da30be202097dd7fe248bbbaaf74c116c5c60778eeaf194b" exitCode=0 Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.860993 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8mr7r" Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.861049 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8mr7r" event={"ID":"a7d01a25-5580-44e2-9f61-4296a5a0556e","Type":"ContainerDied","Data":"86d868cce87883a1da30be202097dd7fe248bbbaaf74c116c5c60778eeaf194b"} Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.861121 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8mr7r" event={"ID":"a7d01a25-5580-44e2-9f61-4296a5a0556e","Type":"ContainerDied","Data":"11316ab87b39361e10689c730c44b08ee0e799aead6c4f042f98eb83446ff26e"} Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.861178 5024 scope.go:117] "RemoveContainer" containerID="86d868cce87883a1da30be202097dd7fe248bbbaaf74c116c5c60778eeaf194b" Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.882511 5024 scope.go:117] "RemoveContainer" containerID="41d24a07602b5b59a1fe9b29972314e1d0d3dcdfb53318cf057bddf4bf392b1a" Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.885026 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8mr7r"] Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.894395 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8mr7r"] Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.909084 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hbw9f" Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.915479 5024 scope.go:117] "RemoveContainer" containerID="aa399f3254cf61c27c63518c72067575520afad8d64b265fcf0ecc1d5f3c8e42" Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.946556 5024 scope.go:117] "RemoveContainer" containerID="86d868cce87883a1da30be202097dd7fe248bbbaaf74c116c5c60778eeaf194b" Oct 07 13:03:24 crc 
kubenswrapper[5024]: E1007 13:03:24.947263 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86d868cce87883a1da30be202097dd7fe248bbbaaf74c116c5c60778eeaf194b\": container with ID starting with 86d868cce87883a1da30be202097dd7fe248bbbaaf74c116c5c60778eeaf194b not found: ID does not exist" containerID="86d868cce87883a1da30be202097dd7fe248bbbaaf74c116c5c60778eeaf194b" Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.947310 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86d868cce87883a1da30be202097dd7fe248bbbaaf74c116c5c60778eeaf194b"} err="failed to get container status \"86d868cce87883a1da30be202097dd7fe248bbbaaf74c116c5c60778eeaf194b\": rpc error: code = NotFound desc = could not find container \"86d868cce87883a1da30be202097dd7fe248bbbaaf74c116c5c60778eeaf194b\": container with ID starting with 86d868cce87883a1da30be202097dd7fe248bbbaaf74c116c5c60778eeaf194b not found: ID does not exist" Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.947361 5024 scope.go:117] "RemoveContainer" containerID="41d24a07602b5b59a1fe9b29972314e1d0d3dcdfb53318cf057bddf4bf392b1a" Oct 07 13:03:24 crc kubenswrapper[5024]: E1007 13:03:24.947752 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41d24a07602b5b59a1fe9b29972314e1d0d3dcdfb53318cf057bddf4bf392b1a\": container with ID starting with 41d24a07602b5b59a1fe9b29972314e1d0d3dcdfb53318cf057bddf4bf392b1a not found: ID does not exist" containerID="41d24a07602b5b59a1fe9b29972314e1d0d3dcdfb53318cf057bddf4bf392b1a" Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.947784 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41d24a07602b5b59a1fe9b29972314e1d0d3dcdfb53318cf057bddf4bf392b1a"} err="failed to get container status 
\"41d24a07602b5b59a1fe9b29972314e1d0d3dcdfb53318cf057bddf4bf392b1a\": rpc error: code = NotFound desc = could not find container \"41d24a07602b5b59a1fe9b29972314e1d0d3dcdfb53318cf057bddf4bf392b1a\": container with ID starting with 41d24a07602b5b59a1fe9b29972314e1d0d3dcdfb53318cf057bddf4bf392b1a not found: ID does not exist" Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.947798 5024 scope.go:117] "RemoveContainer" containerID="aa399f3254cf61c27c63518c72067575520afad8d64b265fcf0ecc1d5f3c8e42" Oct 07 13:03:24 crc kubenswrapper[5024]: E1007 13:03:24.948218 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa399f3254cf61c27c63518c72067575520afad8d64b265fcf0ecc1d5f3c8e42\": container with ID starting with aa399f3254cf61c27c63518c72067575520afad8d64b265fcf0ecc1d5f3c8e42 not found: ID does not exist" containerID="aa399f3254cf61c27c63518c72067575520afad8d64b265fcf0ecc1d5f3c8e42" Oct 07 13:03:24 crc kubenswrapper[5024]: I1007 13:03:24.948250 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa399f3254cf61c27c63518c72067575520afad8d64b265fcf0ecc1d5f3c8e42"} err="failed to get container status \"aa399f3254cf61c27c63518c72067575520afad8d64b265fcf0ecc1d5f3c8e42\": rpc error: code = NotFound desc = could not find container \"aa399f3254cf61c27c63518c72067575520afad8d64b265fcf0ecc1d5f3c8e42\": container with ID starting with aa399f3254cf61c27c63518c72067575520afad8d64b265fcf0ecc1d5f3c8e42 not found: ID does not exist" Oct 07 13:03:26 crc kubenswrapper[5024]: I1007 13:03:26.703521 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hbw9f"] Oct 07 13:03:26 crc kubenswrapper[5024]: I1007 13:03:26.763716 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7d01a25-5580-44e2-9f61-4296a5a0556e" path="/var/lib/kubelet/pods/a7d01a25-5580-44e2-9f61-4296a5a0556e/volumes" Oct 07 
13:03:26 crc kubenswrapper[5024]: I1007 13:03:26.883523 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hbw9f" podUID="d36073b8-f184-4ed9-aa18-f46d8459b4b7" containerName="registry-server" containerID="cri-o://8a9f5f84610fb32686c6e4559926efd3738361b29f46062c6dcdb17cdd6d849c" gracePeriod=2 Oct 07 13:03:27 crc kubenswrapper[5024]: I1007 13:03:27.404719 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hbw9f" Oct 07 13:03:27 crc kubenswrapper[5024]: I1007 13:03:27.556782 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d36073b8-f184-4ed9-aa18-f46d8459b4b7-catalog-content\") pod \"d36073b8-f184-4ed9-aa18-f46d8459b4b7\" (UID: \"d36073b8-f184-4ed9-aa18-f46d8459b4b7\") " Oct 07 13:03:27 crc kubenswrapper[5024]: I1007 13:03:27.556976 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vd6l\" (UniqueName: \"kubernetes.io/projected/d36073b8-f184-4ed9-aa18-f46d8459b4b7-kube-api-access-2vd6l\") pod \"d36073b8-f184-4ed9-aa18-f46d8459b4b7\" (UID: \"d36073b8-f184-4ed9-aa18-f46d8459b4b7\") " Oct 07 13:03:27 crc kubenswrapper[5024]: I1007 13:03:27.557026 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d36073b8-f184-4ed9-aa18-f46d8459b4b7-utilities\") pod \"d36073b8-f184-4ed9-aa18-f46d8459b4b7\" (UID: \"d36073b8-f184-4ed9-aa18-f46d8459b4b7\") " Oct 07 13:03:27 crc kubenswrapper[5024]: I1007 13:03:27.558443 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d36073b8-f184-4ed9-aa18-f46d8459b4b7-utilities" (OuterVolumeSpecName: "utilities") pod "d36073b8-f184-4ed9-aa18-f46d8459b4b7" (UID: "d36073b8-f184-4ed9-aa18-f46d8459b4b7"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:03:27 crc kubenswrapper[5024]: I1007 13:03:27.563193 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d36073b8-f184-4ed9-aa18-f46d8459b4b7-kube-api-access-2vd6l" (OuterVolumeSpecName: "kube-api-access-2vd6l") pod "d36073b8-f184-4ed9-aa18-f46d8459b4b7" (UID: "d36073b8-f184-4ed9-aa18-f46d8459b4b7"). InnerVolumeSpecName "kube-api-access-2vd6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:03:27 crc kubenswrapper[5024]: I1007 13:03:27.607921 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d36073b8-f184-4ed9-aa18-f46d8459b4b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d36073b8-f184-4ed9-aa18-f46d8459b4b7" (UID: "d36073b8-f184-4ed9-aa18-f46d8459b4b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:03:27 crc kubenswrapper[5024]: I1007 13:03:27.659502 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vd6l\" (UniqueName: \"kubernetes.io/projected/d36073b8-f184-4ed9-aa18-f46d8459b4b7-kube-api-access-2vd6l\") on node \"crc\" DevicePath \"\"" Oct 07 13:03:27 crc kubenswrapper[5024]: I1007 13:03:27.659550 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d36073b8-f184-4ed9-aa18-f46d8459b4b7-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 13:03:27 crc kubenswrapper[5024]: I1007 13:03:27.659560 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d36073b8-f184-4ed9-aa18-f46d8459b4b7-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 13:03:27 crc kubenswrapper[5024]: I1007 13:03:27.892285 5024 generic.go:334] "Generic (PLEG): container finished" podID="d36073b8-f184-4ed9-aa18-f46d8459b4b7" 
containerID="8a9f5f84610fb32686c6e4559926efd3738361b29f46062c6dcdb17cdd6d849c" exitCode=0 Oct 07 13:03:27 crc kubenswrapper[5024]: I1007 13:03:27.892358 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbw9f" event={"ID":"d36073b8-f184-4ed9-aa18-f46d8459b4b7","Type":"ContainerDied","Data":"8a9f5f84610fb32686c6e4559926efd3738361b29f46062c6dcdb17cdd6d849c"} Oct 07 13:03:27 crc kubenswrapper[5024]: I1007 13:03:27.892402 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbw9f" event={"ID":"d36073b8-f184-4ed9-aa18-f46d8459b4b7","Type":"ContainerDied","Data":"be7a2fc071699dff130f840bffa7a4ef835779f2484cd0af179eceef205b74cc"} Oct 07 13:03:27 crc kubenswrapper[5024]: I1007 13:03:27.892394 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hbw9f" Oct 07 13:03:27 crc kubenswrapper[5024]: I1007 13:03:27.892421 5024 scope.go:117] "RemoveContainer" containerID="8a9f5f84610fb32686c6e4559926efd3738361b29f46062c6dcdb17cdd6d849c" Oct 07 13:03:27 crc kubenswrapper[5024]: I1007 13:03:27.931507 5024 scope.go:117] "RemoveContainer" containerID="81687fb597797402c82363d5a394e535c1ad1d8bba7a11cc41a3c1df363f48fe" Oct 07 13:03:27 crc kubenswrapper[5024]: I1007 13:03:27.933941 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hbw9f"] Oct 07 13:03:27 crc kubenswrapper[5024]: I1007 13:03:27.945042 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hbw9f"] Oct 07 13:03:27 crc kubenswrapper[5024]: I1007 13:03:27.964594 5024 scope.go:117] "RemoveContainer" containerID="488d85c25ad84ae845ab6a70cd479c3423c08cbb4363be1ea5c1c4769287ca1f" Oct 07 13:03:27 crc kubenswrapper[5024]: I1007 13:03:27.996303 5024 scope.go:117] "RemoveContainer" containerID="8a9f5f84610fb32686c6e4559926efd3738361b29f46062c6dcdb17cdd6d849c" Oct 07 
13:03:27 crc kubenswrapper[5024]: E1007 13:03:27.996980 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a9f5f84610fb32686c6e4559926efd3738361b29f46062c6dcdb17cdd6d849c\": container with ID starting with 8a9f5f84610fb32686c6e4559926efd3738361b29f46062c6dcdb17cdd6d849c not found: ID does not exist" containerID="8a9f5f84610fb32686c6e4559926efd3738361b29f46062c6dcdb17cdd6d849c" Oct 07 13:03:27 crc kubenswrapper[5024]: I1007 13:03:27.997055 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a9f5f84610fb32686c6e4559926efd3738361b29f46062c6dcdb17cdd6d849c"} err="failed to get container status \"8a9f5f84610fb32686c6e4559926efd3738361b29f46062c6dcdb17cdd6d849c\": rpc error: code = NotFound desc = could not find container \"8a9f5f84610fb32686c6e4559926efd3738361b29f46062c6dcdb17cdd6d849c\": container with ID starting with 8a9f5f84610fb32686c6e4559926efd3738361b29f46062c6dcdb17cdd6d849c not found: ID does not exist" Oct 07 13:03:27 crc kubenswrapper[5024]: I1007 13:03:27.997088 5024 scope.go:117] "RemoveContainer" containerID="81687fb597797402c82363d5a394e535c1ad1d8bba7a11cc41a3c1df363f48fe" Oct 07 13:03:27 crc kubenswrapper[5024]: E1007 13:03:27.997625 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81687fb597797402c82363d5a394e535c1ad1d8bba7a11cc41a3c1df363f48fe\": container with ID starting with 81687fb597797402c82363d5a394e535c1ad1d8bba7a11cc41a3c1df363f48fe not found: ID does not exist" containerID="81687fb597797402c82363d5a394e535c1ad1d8bba7a11cc41a3c1df363f48fe" Oct 07 13:03:27 crc kubenswrapper[5024]: I1007 13:03:27.997756 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81687fb597797402c82363d5a394e535c1ad1d8bba7a11cc41a3c1df363f48fe"} err="failed to get container status 
\"81687fb597797402c82363d5a394e535c1ad1d8bba7a11cc41a3c1df363f48fe\": rpc error: code = NotFound desc = could not find container \"81687fb597797402c82363d5a394e535c1ad1d8bba7a11cc41a3c1df363f48fe\": container with ID starting with 81687fb597797402c82363d5a394e535c1ad1d8bba7a11cc41a3c1df363f48fe not found: ID does not exist" Oct 07 13:03:27 crc kubenswrapper[5024]: I1007 13:03:27.997817 5024 scope.go:117] "RemoveContainer" containerID="488d85c25ad84ae845ab6a70cd479c3423c08cbb4363be1ea5c1c4769287ca1f" Oct 07 13:03:27 crc kubenswrapper[5024]: E1007 13:03:27.998261 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"488d85c25ad84ae845ab6a70cd479c3423c08cbb4363be1ea5c1c4769287ca1f\": container with ID starting with 488d85c25ad84ae845ab6a70cd479c3423c08cbb4363be1ea5c1c4769287ca1f not found: ID does not exist" containerID="488d85c25ad84ae845ab6a70cd479c3423c08cbb4363be1ea5c1c4769287ca1f" Oct 07 13:03:27 crc kubenswrapper[5024]: I1007 13:03:27.998295 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"488d85c25ad84ae845ab6a70cd479c3423c08cbb4363be1ea5c1c4769287ca1f"} err="failed to get container status \"488d85c25ad84ae845ab6a70cd479c3423c08cbb4363be1ea5c1c4769287ca1f\": rpc error: code = NotFound desc = could not find container \"488d85c25ad84ae845ab6a70cd479c3423c08cbb4363be1ea5c1c4769287ca1f\": container with ID starting with 488d85c25ad84ae845ab6a70cd479c3423c08cbb4363be1ea5c1c4769287ca1f not found: ID does not exist" Oct 07 13:03:28 crc kubenswrapper[5024]: I1007 13:03:28.771013 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d36073b8-f184-4ed9-aa18-f46d8459b4b7" path="/var/lib/kubelet/pods/d36073b8-f184-4ed9-aa18-f46d8459b4b7/volumes" Oct 07 13:03:35 crc kubenswrapper[5024]: I1007 13:03:35.326543 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc"] Oct 07 13:03:35 crc kubenswrapper[5024]: I1007 13:03:35.342684 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx"] Oct 07 13:03:35 crc kubenswrapper[5024]: I1007 13:03:35.356711 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t4cdc"] Oct 07 13:03:35 crc kubenswrapper[5024]: I1007 13:03:35.365900 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-t5btx"] Oct 07 13:03:35 crc kubenswrapper[5024]: I1007 13:03:35.373104 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ddw85"] Oct 07 13:03:35 crc kubenswrapper[5024]: I1007 13:03:35.379327 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2f67n"] Oct 07 13:03:35 crc kubenswrapper[5024]: I1007 13:03:35.386507 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-f52k8"] Oct 07 13:03:35 crc kubenswrapper[5024]: I1007 13:03:35.392892 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv"] Oct 07 13:03:35 crc kubenswrapper[5024]: I1007 13:03:35.398468 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx"] Oct 07 13:03:35 crc kubenswrapper[5024]: I1007 13:03:35.403740 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqvpm"] Oct 07 13:03:35 crc kubenswrapper[5024]: I1007 13:03:35.410228 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2ltqv"] Oct 07 13:03:35 crc 
kubenswrapper[5024]: I1007 13:03:35.416923 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-f52k8"] Oct 07 13:03:35 crc kubenswrapper[5024]: I1007 13:03:35.423524 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ddw85"] Oct 07 13:03:35 crc kubenswrapper[5024]: I1007 13:03:35.432623 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ptg46"] Oct 07 13:03:35 crc kubenswrapper[5024]: I1007 13:03:35.441347 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzsgx"] Oct 07 13:03:35 crc kubenswrapper[5024]: I1007 13:03:35.450562 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7"] Oct 07 13:03:35 crc kubenswrapper[5024]: I1007 13:03:35.458771 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8"] Oct 07 13:03:35 crc kubenswrapper[5024]: I1007 13:03:35.465988 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2f67n"] Oct 07 13:03:35 crc kubenswrapper[5024]: I1007 13:03:35.471732 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqvpm"] Oct 07 13:03:35 crc kubenswrapper[5024]: I1007 13:03:35.477322 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ptg46"] Oct 07 13:03:35 crc kubenswrapper[5024]: I1007 13:03:35.482491 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z5zf7"] Oct 07 13:03:35 crc kubenswrapper[5024]: I1007 13:03:35.487895 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vjcg8"] Oct 07 13:03:36 crc kubenswrapper[5024]: I1007 13:03:36.772612 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b7b80b0-5656-47b7-8da5-bd0b25255076" path="/var/lib/kubelet/pods/1b7b80b0-5656-47b7-8da5-bd0b25255076/volumes" Oct 07 13:03:36 crc kubenswrapper[5024]: I1007 13:03:36.774753 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f3929fb-37ea-4655-a429-d1d4019751f4" path="/var/lib/kubelet/pods/2f3929fb-37ea-4655-a429-d1d4019751f4/volumes" Oct 07 13:03:36 crc kubenswrapper[5024]: I1007 13:03:36.775462 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66e17556-df1d-48d3-b70a-9fe70ca23500" path="/var/lib/kubelet/pods/66e17556-df1d-48d3-b70a-9fe70ca23500/volumes" Oct 07 13:03:36 crc kubenswrapper[5024]: I1007 13:03:36.775994 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f04d138-3e0f-47a6-8bb5-3488ec712d2d" path="/var/lib/kubelet/pods/9f04d138-3e0f-47a6-8bb5-3488ec712d2d/volumes" Oct 07 13:03:36 crc kubenswrapper[5024]: I1007 13:03:36.777081 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acfe274e-e517-4ccb-8539-3d3e0f87ad2b" path="/var/lib/kubelet/pods/acfe274e-e517-4ccb-8539-3d3e0f87ad2b/volumes" Oct 07 13:03:36 crc kubenswrapper[5024]: I1007 13:03:36.777621 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9b6d878-915c-4356-bce8-14013e435c92" path="/var/lib/kubelet/pods/b9b6d878-915c-4356-bce8-14013e435c92/volumes" Oct 07 13:03:36 crc kubenswrapper[5024]: I1007 13:03:36.778174 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c90ef245-e526-4462-aed6-43807ec3951f" path="/var/lib/kubelet/pods/c90ef245-e526-4462-aed6-43807ec3951f/volumes" Oct 07 13:03:36 crc kubenswrapper[5024]: I1007 13:03:36.779175 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce6b8f22-a957-4a3c-b31c-6d9433ce6c80" 
path="/var/lib/kubelet/pods/ce6b8f22-a957-4a3c-b31c-6d9433ce6c80/volumes" Oct 07 13:03:36 crc kubenswrapper[5024]: I1007 13:03:36.779744 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4288bca-eb56-49e6-bf27-f5badea28e48" path="/var/lib/kubelet/pods/e4288bca-eb56-49e6-bf27-f5badea28e48/volumes" Oct 07 13:03:36 crc kubenswrapper[5024]: I1007 13:03:36.780278 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8e73851-73a1-4e34-936f-dc608e4f490b" path="/var/lib/kubelet/pods/e8e73851-73a1-4e34-936f-dc608e4f490b/volumes" Oct 07 13:03:36 crc kubenswrapper[5024]: I1007 13:03:36.780789 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7afd213-475e-4426-913f-ec7b75850f3a" path="/var/lib/kubelet/pods/f7afd213-475e-4426-913f-ec7b75850f3a/volumes" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.579641 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26"] Oct 07 13:03:41 crc kubenswrapper[5024]: E1007 13:03:41.580796 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36073b8-f184-4ed9-aa18-f46d8459b4b7" containerName="registry-server" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.580812 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36073b8-f184-4ed9-aa18-f46d8459b4b7" containerName="registry-server" Oct 07 13:03:41 crc kubenswrapper[5024]: E1007 13:03:41.580829 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36073b8-f184-4ed9-aa18-f46d8459b4b7" containerName="extract-content" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.580835 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36073b8-f184-4ed9-aa18-f46d8459b4b7" containerName="extract-content" Oct 07 13:03:41 crc kubenswrapper[5024]: E1007 13:03:41.580852 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7d01a25-5580-44e2-9f61-4296a5a0556e" 
containerName="extract-utilities" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.580860 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7d01a25-5580-44e2-9f61-4296a5a0556e" containerName="extract-utilities" Oct 07 13:03:41 crc kubenswrapper[5024]: E1007 13:03:41.580874 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7d01a25-5580-44e2-9f61-4296a5a0556e" containerName="registry-server" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.580880 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7d01a25-5580-44e2-9f61-4296a5a0556e" containerName="registry-server" Oct 07 13:03:41 crc kubenswrapper[5024]: E1007 13:03:41.580897 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7d01a25-5580-44e2-9f61-4296a5a0556e" containerName="extract-content" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.580902 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7d01a25-5580-44e2-9f61-4296a5a0556e" containerName="extract-content" Oct 07 13:03:41 crc kubenswrapper[5024]: E1007 13:03:41.580910 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36073b8-f184-4ed9-aa18-f46d8459b4b7" containerName="extract-utilities" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.580916 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36073b8-f184-4ed9-aa18-f46d8459b4b7" containerName="extract-utilities" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.581092 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="d36073b8-f184-4ed9-aa18-f46d8459b4b7" containerName="registry-server" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.581108 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7d01a25-5580-44e2-9f61-4296a5a0556e" containerName="registry-server" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.581868 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.584004 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.586511 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.586503 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.586624 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.586682 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-424lb" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.602741 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26"] Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.679976 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9z52\" (UniqueName: \"kubernetes.io/projected/8db00627-6d2c-4acb-915e-413ed2590639-kube-api-access-h9z52\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26\" (UID: \"8db00627-6d2c-4acb-915e-413ed2590639\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.680326 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8db00627-6d2c-4acb-915e-413ed2590639-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26\" (UID: \"8db00627-6d2c-4acb-915e-413ed2590639\") 
" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.680410 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8db00627-6d2c-4acb-915e-413ed2590639-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26\" (UID: \"8db00627-6d2c-4acb-915e-413ed2590639\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.680679 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8db00627-6d2c-4acb-915e-413ed2590639-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26\" (UID: \"8db00627-6d2c-4acb-915e-413ed2590639\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.680790 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8db00627-6d2c-4acb-915e-413ed2590639-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26\" (UID: \"8db00627-6d2c-4acb-915e-413ed2590639\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.782640 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8db00627-6d2c-4acb-915e-413ed2590639-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26\" (UID: \"8db00627-6d2c-4acb-915e-413ed2590639\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.782709 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8db00627-6d2c-4acb-915e-413ed2590639-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26\" (UID: \"8db00627-6d2c-4acb-915e-413ed2590639\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.782771 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8db00627-6d2c-4acb-915e-413ed2590639-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26\" (UID: \"8db00627-6d2c-4acb-915e-413ed2590639\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.782823 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8db00627-6d2c-4acb-915e-413ed2590639-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26\" (UID: \"8db00627-6d2c-4acb-915e-413ed2590639\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.782866 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9z52\" (UniqueName: \"kubernetes.io/projected/8db00627-6d2c-4acb-915e-413ed2590639-kube-api-access-h9z52\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26\" (UID: \"8db00627-6d2c-4acb-915e-413ed2590639\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.791941 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8db00627-6d2c-4acb-915e-413ed2590639-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26\" (UID: \"8db00627-6d2c-4acb-915e-413ed2590639\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.792640 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8db00627-6d2c-4acb-915e-413ed2590639-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26\" (UID: \"8db00627-6d2c-4acb-915e-413ed2590639\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.793549 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8db00627-6d2c-4acb-915e-413ed2590639-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26\" (UID: \"8db00627-6d2c-4acb-915e-413ed2590639\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.800367 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8db00627-6d2c-4acb-915e-413ed2590639-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26\" (UID: \"8db00627-6d2c-4acb-915e-413ed2590639\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.816463 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9z52\" (UniqueName: \"kubernetes.io/projected/8db00627-6d2c-4acb-915e-413ed2590639-kube-api-access-h9z52\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26\" (UID: \"8db00627-6d2c-4acb-915e-413ed2590639\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26" Oct 07 13:03:41 crc kubenswrapper[5024]: I1007 13:03:41.915829 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26" Oct 07 13:03:42 crc kubenswrapper[5024]: I1007 13:03:42.515861 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26"] Oct 07 13:03:43 crc kubenswrapper[5024]: I1007 13:03:43.046870 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 07 13:03:43 crc kubenswrapper[5024]: I1007 13:03:43.058573 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26" event={"ID":"8db00627-6d2c-4acb-915e-413ed2590639","Type":"ContainerStarted","Data":"acc7bd54bd276854e75a09e31aa8dab2c5be5bc53b4eaa5065259a02fe1f9c2d"} Oct 07 13:03:44 crc kubenswrapper[5024]: I1007 13:03:44.083778 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26" event={"ID":"8db00627-6d2c-4acb-915e-413ed2590639","Type":"ContainerStarted","Data":"af95d389666179db8fc3f83485bb2915be7187130e2e193b6cb5a15e89924146"} Oct 07 13:03:44 crc kubenswrapper[5024]: I1007 13:03:44.123784 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26" podStartSLOduration=2.6019326720000002 podStartE2EDuration="3.123752414s" podCreationTimestamp="2025-10-07 13:03:41 +0000 UTC" firstStartedPulling="2025-10-07 13:03:42.520126189 +0000 UTC m=+2160.595913027" lastFinishedPulling="2025-10-07 13:03:43.041945891 +0000 UTC m=+2161.117732769" observedRunningTime="2025-10-07 13:03:44.103930154 +0000 UTC m=+2162.179717002" watchObservedRunningTime="2025-10-07 13:03:44.123752414 +0000 UTC m=+2162.199539262" Oct 07 13:03:47 crc kubenswrapper[5024]: I1007 13:03:47.225105 5024 scope.go:117] "RemoveContainer" containerID="90a85f7e26de6dba4c9a70e33c338d3d4449b49b134bc94874fefd9d19c4cbef" Oct 07 13:03:47 crc 
kubenswrapper[5024]: I1007 13:03:47.287125 5024 scope.go:117] "RemoveContainer" containerID="00c612a88448688b35e79421d7447906b42bab903114840f6ba7d2ad9d0fe06d" Oct 07 13:03:47 crc kubenswrapper[5024]: I1007 13:03:47.350897 5024 scope.go:117] "RemoveContainer" containerID="aed39f75fbb9905306d4f1aeaa21a69f69e75ca142f47b21ec34d6326fb51ec2" Oct 07 13:03:47 crc kubenswrapper[5024]: I1007 13:03:47.386019 5024 scope.go:117] "RemoveContainer" containerID="d719ec6b642856c166681ace7bd66a4db4b7d023db1ebc6a8cd081870d920f0f" Oct 07 13:03:47 crc kubenswrapper[5024]: I1007 13:03:47.435491 5024 scope.go:117] "RemoveContainer" containerID="7184743f32f7e166451ff044b3658ff713a1db62f33ee9ce896760c3faeb0b57" Oct 07 13:03:47 crc kubenswrapper[5024]: I1007 13:03:47.498344 5024 scope.go:117] "RemoveContainer" containerID="5660f0861b1102a8db6957f35e9fda0da1e9827c1c4a1436efe162f88b8f50cd" Oct 07 13:03:47 crc kubenswrapper[5024]: I1007 13:03:47.547446 5024 scope.go:117] "RemoveContainer" containerID="4d7687b728655f2261e9bd988dbc6d46994ce17b2453148006568bd76e0dd7bc" Oct 07 13:03:47 crc kubenswrapper[5024]: I1007 13:03:47.593490 5024 scope.go:117] "RemoveContainer" containerID="fe37dd83f62bd1503813ddf10757a9fe25b64ed0b5accf5bff46c2e0f23aaf5f" Oct 07 13:03:55 crc kubenswrapper[5024]: I1007 13:03:55.236045 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26" event={"ID":"8db00627-6d2c-4acb-915e-413ed2590639","Type":"ContainerDied","Data":"af95d389666179db8fc3f83485bb2915be7187130e2e193b6cb5a15e89924146"} Oct 07 13:03:55 crc kubenswrapper[5024]: I1007 13:03:55.236022 5024 generic.go:334] "Generic (PLEG): container finished" podID="8db00627-6d2c-4acb-915e-413ed2590639" containerID="af95d389666179db8fc3f83485bb2915be7187130e2e193b6cb5a15e89924146" exitCode=0 Oct 07 13:03:56 crc kubenswrapper[5024]: I1007 13:03:56.699991 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26" Oct 07 13:03:56 crc kubenswrapper[5024]: I1007 13:03:56.778873 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8db00627-6d2c-4acb-915e-413ed2590639-ssh-key\") pod \"8db00627-6d2c-4acb-915e-413ed2590639\" (UID: \"8db00627-6d2c-4acb-915e-413ed2590639\") " Oct 07 13:03:56 crc kubenswrapper[5024]: I1007 13:03:56.779060 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8db00627-6d2c-4acb-915e-413ed2590639-repo-setup-combined-ca-bundle\") pod \"8db00627-6d2c-4acb-915e-413ed2590639\" (UID: \"8db00627-6d2c-4acb-915e-413ed2590639\") " Oct 07 13:03:56 crc kubenswrapper[5024]: I1007 13:03:56.779220 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9z52\" (UniqueName: \"kubernetes.io/projected/8db00627-6d2c-4acb-915e-413ed2590639-kube-api-access-h9z52\") pod \"8db00627-6d2c-4acb-915e-413ed2590639\" (UID: \"8db00627-6d2c-4acb-915e-413ed2590639\") " Oct 07 13:03:56 crc kubenswrapper[5024]: I1007 13:03:56.779271 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8db00627-6d2c-4acb-915e-413ed2590639-ceph\") pod \"8db00627-6d2c-4acb-915e-413ed2590639\" (UID: \"8db00627-6d2c-4acb-915e-413ed2590639\") " Oct 07 13:03:56 crc kubenswrapper[5024]: I1007 13:03:56.779361 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8db00627-6d2c-4acb-915e-413ed2590639-inventory\") pod \"8db00627-6d2c-4acb-915e-413ed2590639\" (UID: \"8db00627-6d2c-4acb-915e-413ed2590639\") " Oct 07 13:03:56 crc kubenswrapper[5024]: I1007 13:03:56.786815 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/8db00627-6d2c-4acb-915e-413ed2590639-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "8db00627-6d2c-4acb-915e-413ed2590639" (UID: "8db00627-6d2c-4acb-915e-413ed2590639"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:03:56 crc kubenswrapper[5024]: I1007 13:03:56.787621 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8db00627-6d2c-4acb-915e-413ed2590639-kube-api-access-h9z52" (OuterVolumeSpecName: "kube-api-access-h9z52") pod "8db00627-6d2c-4acb-915e-413ed2590639" (UID: "8db00627-6d2c-4acb-915e-413ed2590639"). InnerVolumeSpecName "kube-api-access-h9z52". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:03:56 crc kubenswrapper[5024]: I1007 13:03:56.793449 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8db00627-6d2c-4acb-915e-413ed2590639-ceph" (OuterVolumeSpecName: "ceph") pod "8db00627-6d2c-4acb-915e-413ed2590639" (UID: "8db00627-6d2c-4acb-915e-413ed2590639"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:03:56 crc kubenswrapper[5024]: I1007 13:03:56.822442 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8db00627-6d2c-4acb-915e-413ed2590639-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8db00627-6d2c-4acb-915e-413ed2590639" (UID: "8db00627-6d2c-4acb-915e-413ed2590639"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:03:56 crc kubenswrapper[5024]: I1007 13:03:56.826075 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8db00627-6d2c-4acb-915e-413ed2590639-inventory" (OuterVolumeSpecName: "inventory") pod "8db00627-6d2c-4acb-915e-413ed2590639" (UID: "8db00627-6d2c-4acb-915e-413ed2590639"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:03:56 crc kubenswrapper[5024]: I1007 13:03:56.882881 5024 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8db00627-6d2c-4acb-915e-413ed2590639-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 13:03:56 crc kubenswrapper[5024]: I1007 13:03:56.882939 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9z52\" (UniqueName: \"kubernetes.io/projected/8db00627-6d2c-4acb-915e-413ed2590639-kube-api-access-h9z52\") on node \"crc\" DevicePath \"\"" Oct 07 13:03:56 crc kubenswrapper[5024]: I1007 13:03:56.882955 5024 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8db00627-6d2c-4acb-915e-413ed2590639-ceph\") on node \"crc\" DevicePath \"\"" Oct 07 13:03:56 crc kubenswrapper[5024]: I1007 13:03:56.882973 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8db00627-6d2c-4acb-915e-413ed2590639-inventory\") on node \"crc\" DevicePath \"\"" Oct 07 13:03:56 crc kubenswrapper[5024]: I1007 13:03:56.883036 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8db00627-6d2c-4acb-915e-413ed2590639-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.269675 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26" event={"ID":"8db00627-6d2c-4acb-915e-413ed2590639","Type":"ContainerDied","Data":"acc7bd54bd276854e75a09e31aa8dab2c5be5bc53b4eaa5065259a02fe1f9c2d"} Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.269749 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acc7bd54bd276854e75a09e31aa8dab2c5be5bc53b4eaa5065259a02fe1f9c2d" Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.270124 
5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26" Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.444638 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt"] Oct 07 13:03:57 crc kubenswrapper[5024]: E1007 13:03:57.445388 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8db00627-6d2c-4acb-915e-413ed2590639" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.445423 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8db00627-6d2c-4acb-915e-413ed2590639" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.445686 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="8db00627-6d2c-4acb-915e-413ed2590639" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.446715 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt" Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.452204 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.452760 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.453032 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.453307 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.453511 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-424lb" Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.480828 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt"] Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.609534 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcvcl\" (UniqueName: \"kubernetes.io/projected/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-kube-api-access-qcvcl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt\" (UID: \"410ef07b-0d0c-49c7-9e8d-a144b83b7bca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt" Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.609611 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt\" (UID: \"410ef07b-0d0c-49c7-9e8d-a144b83b7bca\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt" Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.609785 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt\" (UID: \"410ef07b-0d0c-49c7-9e8d-a144b83b7bca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt" Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.609890 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt\" (UID: \"410ef07b-0d0c-49c7-9e8d-a144b83b7bca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt" Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.610526 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt\" (UID: \"410ef07b-0d0c-49c7-9e8d-a144b83b7bca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt" Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.713208 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt\" (UID: \"410ef07b-0d0c-49c7-9e8d-a144b83b7bca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt" Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.713284 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qcvcl\" (UniqueName: \"kubernetes.io/projected/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-kube-api-access-qcvcl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt\" (UID: \"410ef07b-0d0c-49c7-9e8d-a144b83b7bca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt" Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.713317 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt\" (UID: \"410ef07b-0d0c-49c7-9e8d-a144b83b7bca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt" Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.713379 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt\" (UID: \"410ef07b-0d0c-49c7-9e8d-a144b83b7bca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt" Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.713443 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt\" (UID: \"410ef07b-0d0c-49c7-9e8d-a144b83b7bca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt" Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.720598 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt\" (UID: \"410ef07b-0d0c-49c7-9e8d-a144b83b7bca\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt" Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.720652 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt\" (UID: \"410ef07b-0d0c-49c7-9e8d-a144b83b7bca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt" Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.720737 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt\" (UID: \"410ef07b-0d0c-49c7-9e8d-a144b83b7bca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt" Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.723415 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt\" (UID: \"410ef07b-0d0c-49c7-9e8d-a144b83b7bca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt" Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.730970 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcvcl\" (UniqueName: \"kubernetes.io/projected/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-kube-api-access-qcvcl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt\" (UID: \"410ef07b-0d0c-49c7-9e8d-a144b83b7bca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt" Oct 07 13:03:57 crc kubenswrapper[5024]: I1007 13:03:57.818960 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt" Oct 07 13:03:58 crc kubenswrapper[5024]: I1007 13:03:58.468691 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt"] Oct 07 13:03:59 crc kubenswrapper[5024]: I1007 13:03:59.291770 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt" event={"ID":"410ef07b-0d0c-49c7-9e8d-a144b83b7bca","Type":"ContainerStarted","Data":"d491b54d38753f3ddaaf772979082f3c1fa03989b75f61bfa739887bf1c3bcdd"} Oct 07 13:04:00 crc kubenswrapper[5024]: I1007 13:04:00.307101 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt" event={"ID":"410ef07b-0d0c-49c7-9e8d-a144b83b7bca","Type":"ContainerStarted","Data":"149bd05ec85bbd54ec51974cb44dcf6270d0d5093177b8d4637de9388650d1ec"} Oct 07 13:04:00 crc kubenswrapper[5024]: I1007 13:04:00.338929 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt" podStartSLOduration=2.748352115 podStartE2EDuration="3.338905376s" podCreationTimestamp="2025-10-07 13:03:57 +0000 UTC" firstStartedPulling="2025-10-07 13:03:58.466410891 +0000 UTC m=+2176.542197749" lastFinishedPulling="2025-10-07 13:03:59.056964172 +0000 UTC m=+2177.132751010" observedRunningTime="2025-10-07 13:04:00.333755467 +0000 UTC m=+2178.409542305" watchObservedRunningTime="2025-10-07 13:04:00.338905376 +0000 UTC m=+2178.414692214" Oct 07 13:04:47 crc kubenswrapper[5024]: I1007 13:04:47.913372 5024 scope.go:117] "RemoveContainer" containerID="37d557787b580eebddbc6196abeff61a982715b60d5cd8ef6aee196067c7702e" Oct 07 13:04:47 crc kubenswrapper[5024]: I1007 13:04:47.970362 5024 scope.go:117] "RemoveContainer" containerID="a3637981e0ebcde4a3b77a55c99f1e086bfe84e30c0ec6da61b1c6ca71b1fa57" Oct 07 13:04:48 crc 
kubenswrapper[5024]: I1007 13:04:48.007188 5024 scope.go:117] "RemoveContainer" containerID="12c546a42f4258eb8ca8e356ca582d3cc42400dbe23238b1980b4e3bc856175b" Oct 07 13:05:42 crc kubenswrapper[5024]: I1007 13:05:42.370492 5024 generic.go:334] "Generic (PLEG): container finished" podID="410ef07b-0d0c-49c7-9e8d-a144b83b7bca" containerID="149bd05ec85bbd54ec51974cb44dcf6270d0d5093177b8d4637de9388650d1ec" exitCode=0 Oct 07 13:05:42 crc kubenswrapper[5024]: I1007 13:05:42.370643 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt" event={"ID":"410ef07b-0d0c-49c7-9e8d-a144b83b7bca","Type":"ContainerDied","Data":"149bd05ec85bbd54ec51974cb44dcf6270d0d5093177b8d4637de9388650d1ec"} Oct 07 13:05:43 crc kubenswrapper[5024]: I1007 13:05:43.720882 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 13:05:43 crc kubenswrapper[5024]: I1007 13:05:43.721453 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 13:05:43 crc kubenswrapper[5024]: I1007 13:05:43.878459 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.015391 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-inventory\") pod \"410ef07b-0d0c-49c7-9e8d-a144b83b7bca\" (UID: \"410ef07b-0d0c-49c7-9e8d-a144b83b7bca\") " Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.015475 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-ssh-key\") pod \"410ef07b-0d0c-49c7-9e8d-a144b83b7bca\" (UID: \"410ef07b-0d0c-49c7-9e8d-a144b83b7bca\") " Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.015777 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-bootstrap-combined-ca-bundle\") pod \"410ef07b-0d0c-49c7-9e8d-a144b83b7bca\" (UID: \"410ef07b-0d0c-49c7-9e8d-a144b83b7bca\") " Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.015885 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-ceph\") pod \"410ef07b-0d0c-49c7-9e8d-a144b83b7bca\" (UID: \"410ef07b-0d0c-49c7-9e8d-a144b83b7bca\") " Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.015970 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcvcl\" (UniqueName: \"kubernetes.io/projected/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-kube-api-access-qcvcl\") pod \"410ef07b-0d0c-49c7-9e8d-a144b83b7bca\" (UID: \"410ef07b-0d0c-49c7-9e8d-a144b83b7bca\") " Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.023470 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-ceph" (OuterVolumeSpecName: "ceph") pod "410ef07b-0d0c-49c7-9e8d-a144b83b7bca" (UID: "410ef07b-0d0c-49c7-9e8d-a144b83b7bca"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.023506 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "410ef07b-0d0c-49c7-9e8d-a144b83b7bca" (UID: "410ef07b-0d0c-49c7-9e8d-a144b83b7bca"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.029274 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-kube-api-access-qcvcl" (OuterVolumeSpecName: "kube-api-access-qcvcl") pod "410ef07b-0d0c-49c7-9e8d-a144b83b7bca" (UID: "410ef07b-0d0c-49c7-9e8d-a144b83b7bca"). InnerVolumeSpecName "kube-api-access-qcvcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.058763 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-inventory" (OuterVolumeSpecName: "inventory") pod "410ef07b-0d0c-49c7-9e8d-a144b83b7bca" (UID: "410ef07b-0d0c-49c7-9e8d-a144b83b7bca"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.068050 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "410ef07b-0d0c-49c7-9e8d-a144b83b7bca" (UID: "410ef07b-0d0c-49c7-9e8d-a144b83b7bca"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.120732 5024 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.120774 5024 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-ceph\") on node \"crc\" DevicePath \"\"" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.120787 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcvcl\" (UniqueName: \"kubernetes.io/projected/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-kube-api-access-qcvcl\") on node \"crc\" DevicePath \"\"" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.120797 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-inventory\") on node \"crc\" DevicePath \"\"" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.120806 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/410ef07b-0d0c-49c7-9e8d-a144b83b7bca-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.395391 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt" event={"ID":"410ef07b-0d0c-49c7-9e8d-a144b83b7bca","Type":"ContainerDied","Data":"d491b54d38753f3ddaaf772979082f3c1fa03989b75f61bfa739887bf1c3bcdd"} Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.395443 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d491b54d38753f3ddaaf772979082f3c1fa03989b75f61bfa739887bf1c3bcdd" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.395506 5024 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.530935 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6"] Oct 07 13:05:44 crc kubenswrapper[5024]: E1007 13:05:44.531523 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410ef07b-0d0c-49c7-9e8d-a144b83b7bca" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.531549 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="410ef07b-0d0c-49c7-9e8d-a144b83b7bca" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.531892 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="410ef07b-0d0c-49c7-9e8d-a144b83b7bca" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.532962 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.538879 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.539648 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.539682 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.539775 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-424lb" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.542108 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.544414 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6"] Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.641012 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6vb5\" (UniqueName: \"kubernetes.io/projected/be08cfea-37bb-4ebf-b56a-678a1e73ee4e-kube-api-access-x6vb5\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6\" (UID: \"be08cfea-37bb-4ebf-b56a-678a1e73ee4e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.641588 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be08cfea-37bb-4ebf-b56a-678a1e73ee4e-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6\" (UID: 
\"be08cfea-37bb-4ebf-b56a-678a1e73ee4e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.641800 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be08cfea-37bb-4ebf-b56a-678a1e73ee4e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6\" (UID: \"be08cfea-37bb-4ebf-b56a-678a1e73ee4e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.642001 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/be08cfea-37bb-4ebf-b56a-678a1e73ee4e-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6\" (UID: \"be08cfea-37bb-4ebf-b56a-678a1e73ee4e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.744441 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be08cfea-37bb-4ebf-b56a-678a1e73ee4e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6\" (UID: \"be08cfea-37bb-4ebf-b56a-678a1e73ee4e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.744496 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/be08cfea-37bb-4ebf-b56a-678a1e73ee4e-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6\" (UID: \"be08cfea-37bb-4ebf-b56a-678a1e73ee4e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.744557 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-x6vb5\" (UniqueName: \"kubernetes.io/projected/be08cfea-37bb-4ebf-b56a-678a1e73ee4e-kube-api-access-x6vb5\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6\" (UID: \"be08cfea-37bb-4ebf-b56a-678a1e73ee4e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.744628 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be08cfea-37bb-4ebf-b56a-678a1e73ee4e-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6\" (UID: \"be08cfea-37bb-4ebf-b56a-678a1e73ee4e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.750998 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be08cfea-37bb-4ebf-b56a-678a1e73ee4e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6\" (UID: \"be08cfea-37bb-4ebf-b56a-678a1e73ee4e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.750998 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be08cfea-37bb-4ebf-b56a-678a1e73ee4e-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6\" (UID: \"be08cfea-37bb-4ebf-b56a-678a1e73ee4e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.752732 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/be08cfea-37bb-4ebf-b56a-678a1e73ee4e-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6\" (UID: 
\"be08cfea-37bb-4ebf-b56a-678a1e73ee4e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.769889 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6vb5\" (UniqueName: \"kubernetes.io/projected/be08cfea-37bb-4ebf-b56a-678a1e73ee4e-kube-api-access-x6vb5\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6\" (UID: \"be08cfea-37bb-4ebf-b56a-678a1e73ee4e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6" Oct 07 13:05:44 crc kubenswrapper[5024]: I1007 13:05:44.865406 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6" Oct 07 13:05:45 crc kubenswrapper[5024]: I1007 13:05:45.434042 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6"] Oct 07 13:05:46 crc kubenswrapper[5024]: I1007 13:05:46.428866 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6" event={"ID":"be08cfea-37bb-4ebf-b56a-678a1e73ee4e","Type":"ContainerStarted","Data":"c7f1513978529032a52e9e5986c96c8481c4db64dbb22d0e9f1a34c14db3657d"} Oct 07 13:05:46 crc kubenswrapper[5024]: I1007 13:05:46.429441 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6" event={"ID":"be08cfea-37bb-4ebf-b56a-678a1e73ee4e","Type":"ContainerStarted","Data":"de7d8ef5c4c8a922f1c7e1941781265d844a5983e468b413fc3661d53670389b"} Oct 07 13:05:46 crc kubenswrapper[5024]: I1007 13:05:46.465572 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6" podStartSLOduration=1.800092773 podStartE2EDuration="2.465541s" podCreationTimestamp="2025-10-07 13:05:44 
+0000 UTC" firstStartedPulling="2025-10-07 13:05:45.437158925 +0000 UTC m=+2283.512945763" lastFinishedPulling="2025-10-07 13:05:46.102607142 +0000 UTC m=+2284.178393990" observedRunningTime="2025-10-07 13:05:46.453805872 +0000 UTC m=+2284.529592730" watchObservedRunningTime="2025-10-07 13:05:46.465541 +0000 UTC m=+2284.541327838" Oct 07 13:06:12 crc kubenswrapper[5024]: I1007 13:06:12.691942 5024 generic.go:334] "Generic (PLEG): container finished" podID="be08cfea-37bb-4ebf-b56a-678a1e73ee4e" containerID="c7f1513978529032a52e9e5986c96c8481c4db64dbb22d0e9f1a34c14db3657d" exitCode=0 Oct 07 13:06:12 crc kubenswrapper[5024]: I1007 13:06:12.692061 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6" event={"ID":"be08cfea-37bb-4ebf-b56a-678a1e73ee4e","Type":"ContainerDied","Data":"c7f1513978529032a52e9e5986c96c8481c4db64dbb22d0e9f1a34c14db3657d"} Oct 07 13:06:13 crc kubenswrapper[5024]: I1007 13:06:13.720538 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 13:06:13 crc kubenswrapper[5024]: I1007 13:06:13.721054 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.174676 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6" Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.290444 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be08cfea-37bb-4ebf-b56a-678a1e73ee4e-ssh-key\") pod \"be08cfea-37bb-4ebf-b56a-678a1e73ee4e\" (UID: \"be08cfea-37bb-4ebf-b56a-678a1e73ee4e\") " Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.290535 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/be08cfea-37bb-4ebf-b56a-678a1e73ee4e-ceph\") pod \"be08cfea-37bb-4ebf-b56a-678a1e73ee4e\" (UID: \"be08cfea-37bb-4ebf-b56a-678a1e73ee4e\") " Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.290732 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be08cfea-37bb-4ebf-b56a-678a1e73ee4e-inventory\") pod \"be08cfea-37bb-4ebf-b56a-678a1e73ee4e\" (UID: \"be08cfea-37bb-4ebf-b56a-678a1e73ee4e\") " Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.290931 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6vb5\" (UniqueName: \"kubernetes.io/projected/be08cfea-37bb-4ebf-b56a-678a1e73ee4e-kube-api-access-x6vb5\") pod \"be08cfea-37bb-4ebf-b56a-678a1e73ee4e\" (UID: \"be08cfea-37bb-4ebf-b56a-678a1e73ee4e\") " Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.298962 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be08cfea-37bb-4ebf-b56a-678a1e73ee4e-kube-api-access-x6vb5" (OuterVolumeSpecName: "kube-api-access-x6vb5") pod "be08cfea-37bb-4ebf-b56a-678a1e73ee4e" (UID: "be08cfea-37bb-4ebf-b56a-678a1e73ee4e"). InnerVolumeSpecName "kube-api-access-x6vb5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.299580 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be08cfea-37bb-4ebf-b56a-678a1e73ee4e-ceph" (OuterVolumeSpecName: "ceph") pod "be08cfea-37bb-4ebf-b56a-678a1e73ee4e" (UID: "be08cfea-37bb-4ebf-b56a-678a1e73ee4e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.326170 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be08cfea-37bb-4ebf-b56a-678a1e73ee4e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "be08cfea-37bb-4ebf-b56a-678a1e73ee4e" (UID: "be08cfea-37bb-4ebf-b56a-678a1e73ee4e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.327259 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be08cfea-37bb-4ebf-b56a-678a1e73ee4e-inventory" (OuterVolumeSpecName: "inventory") pod "be08cfea-37bb-4ebf-b56a-678a1e73ee4e" (UID: "be08cfea-37bb-4ebf-b56a-678a1e73ee4e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.395600 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be08cfea-37bb-4ebf-b56a-678a1e73ee4e-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.395651 5024 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/be08cfea-37bb-4ebf-b56a-678a1e73ee4e-ceph\") on node \"crc\" DevicePath \"\"" Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.395661 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be08cfea-37bb-4ebf-b56a-678a1e73ee4e-inventory\") on node \"crc\" DevicePath \"\"" Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.395680 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6vb5\" (UniqueName: \"kubernetes.io/projected/be08cfea-37bb-4ebf-b56a-678a1e73ee4e-kube-api-access-x6vb5\") on node \"crc\" DevicePath \"\"" Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.710337 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6" event={"ID":"be08cfea-37bb-4ebf-b56a-678a1e73ee4e","Type":"ContainerDied","Data":"de7d8ef5c4c8a922f1c7e1941781265d844a5983e468b413fc3661d53670389b"} Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.710398 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de7d8ef5c4c8a922f1c7e1941781265d844a5983e468b413fc3661d53670389b" Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.710423 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6" Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.822383 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj"] Oct 07 13:06:14 crc kubenswrapper[5024]: E1007 13:06:14.823205 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be08cfea-37bb-4ebf-b56a-678a1e73ee4e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.823227 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="be08cfea-37bb-4ebf-b56a-678a1e73ee4e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.823599 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="be08cfea-37bb-4ebf-b56a-678a1e73ee4e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.826237 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj" Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.831322 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.831544 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-424lb" Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.832093 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.832155 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.832370 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.843837 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj"] Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.914891 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/710c0046-3758-4f26-9513-b1064d858b9e-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj\" (UID: \"710c0046-3758-4f26-9513-b1064d858b9e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj" Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.915062 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/710c0046-3758-4f26-9513-b1064d858b9e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj\" (UID: \"710c0046-3758-4f26-9513-b1064d858b9e\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj" Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.915101 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m74f5\" (UniqueName: \"kubernetes.io/projected/710c0046-3758-4f26-9513-b1064d858b9e-kube-api-access-m74f5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj\" (UID: \"710c0046-3758-4f26-9513-b1064d858b9e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj" Oct 07 13:06:14 crc kubenswrapper[5024]: I1007 13:06:14.915330 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/710c0046-3758-4f26-9513-b1064d858b9e-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj\" (UID: \"710c0046-3758-4f26-9513-b1064d858b9e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj" Oct 07 13:06:15 crc kubenswrapper[5024]: I1007 13:06:15.019094 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/710c0046-3758-4f26-9513-b1064d858b9e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj\" (UID: \"710c0046-3758-4f26-9513-b1064d858b9e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj" Oct 07 13:06:15 crc kubenswrapper[5024]: I1007 13:06:15.019222 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m74f5\" (UniqueName: \"kubernetes.io/projected/710c0046-3758-4f26-9513-b1064d858b9e-kube-api-access-m74f5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj\" (UID: \"710c0046-3758-4f26-9513-b1064d858b9e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj" Oct 07 13:06:15 crc kubenswrapper[5024]: I1007 13:06:15.019335 5024 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/710c0046-3758-4f26-9513-b1064d858b9e-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj\" (UID: \"710c0046-3758-4f26-9513-b1064d858b9e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj" Oct 07 13:06:15 crc kubenswrapper[5024]: I1007 13:06:15.019528 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/710c0046-3758-4f26-9513-b1064d858b9e-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj\" (UID: \"710c0046-3758-4f26-9513-b1064d858b9e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj" Oct 07 13:06:15 crc kubenswrapper[5024]: I1007 13:06:15.024764 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/710c0046-3758-4f26-9513-b1064d858b9e-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj\" (UID: \"710c0046-3758-4f26-9513-b1064d858b9e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj" Oct 07 13:06:15 crc kubenswrapper[5024]: I1007 13:06:15.029655 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/710c0046-3758-4f26-9513-b1064d858b9e-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj\" (UID: \"710c0046-3758-4f26-9513-b1064d858b9e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj" Oct 07 13:06:15 crc kubenswrapper[5024]: I1007 13:06:15.030613 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/710c0046-3758-4f26-9513-b1064d858b9e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj\" (UID: \"710c0046-3758-4f26-9513-b1064d858b9e\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj" Oct 07 13:06:15 crc kubenswrapper[5024]: I1007 13:06:15.043826 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m74f5\" (UniqueName: \"kubernetes.io/projected/710c0046-3758-4f26-9513-b1064d858b9e-kube-api-access-m74f5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj\" (UID: \"710c0046-3758-4f26-9513-b1064d858b9e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj" Oct 07 13:06:15 crc kubenswrapper[5024]: I1007 13:06:15.163901 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj" Oct 07 13:06:15 crc kubenswrapper[5024]: I1007 13:06:15.560054 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj"] Oct 07 13:06:15 crc kubenswrapper[5024]: I1007 13:06:15.722794 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj" event={"ID":"710c0046-3758-4f26-9513-b1064d858b9e","Type":"ContainerStarted","Data":"3873d4a17282b5e01464681105cb12538a151b1eb238fa346ec2780e5c669a3e"} Oct 07 13:06:16 crc kubenswrapper[5024]: I1007 13:06:16.732389 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj" event={"ID":"710c0046-3758-4f26-9513-b1064d858b9e","Type":"ContainerStarted","Data":"68b5b5a99535af7c0823fea7b1121833e069398d99f09467767575e46c788a3b"} Oct 07 13:06:16 crc kubenswrapper[5024]: I1007 13:06:16.751573 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj" podStartSLOduration=2.242829769 podStartE2EDuration="2.751553404s" podCreationTimestamp="2025-10-07 13:06:14 +0000 UTC" firstStartedPulling="2025-10-07 
13:06:15.570429442 +0000 UTC m=+2313.646216280" lastFinishedPulling="2025-10-07 13:06:16.079153057 +0000 UTC m=+2314.154939915" observedRunningTime="2025-10-07 13:06:16.750996448 +0000 UTC m=+2314.826783286" watchObservedRunningTime="2025-10-07 13:06:16.751553404 +0000 UTC m=+2314.827340242" Oct 07 13:06:22 crc kubenswrapper[5024]: I1007 13:06:22.800881 5024 generic.go:334] "Generic (PLEG): container finished" podID="710c0046-3758-4f26-9513-b1064d858b9e" containerID="68b5b5a99535af7c0823fea7b1121833e069398d99f09467767575e46c788a3b" exitCode=0 Oct 07 13:06:22 crc kubenswrapper[5024]: I1007 13:06:22.800990 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj" event={"ID":"710c0046-3758-4f26-9513-b1064d858b9e","Type":"ContainerDied","Data":"68b5b5a99535af7c0823fea7b1121833e069398d99f09467767575e46c788a3b"} Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.355812 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj" Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.453666 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m74f5\" (UniqueName: \"kubernetes.io/projected/710c0046-3758-4f26-9513-b1064d858b9e-kube-api-access-m74f5\") pod \"710c0046-3758-4f26-9513-b1064d858b9e\" (UID: \"710c0046-3758-4f26-9513-b1064d858b9e\") " Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.454445 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/710c0046-3758-4f26-9513-b1064d858b9e-inventory\") pod \"710c0046-3758-4f26-9513-b1064d858b9e\" (UID: \"710c0046-3758-4f26-9513-b1064d858b9e\") " Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.454726 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/710c0046-3758-4f26-9513-b1064d858b9e-ceph\") pod \"710c0046-3758-4f26-9513-b1064d858b9e\" (UID: \"710c0046-3758-4f26-9513-b1064d858b9e\") " Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.455974 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/710c0046-3758-4f26-9513-b1064d858b9e-ssh-key\") pod \"710c0046-3758-4f26-9513-b1064d858b9e\" (UID: \"710c0046-3758-4f26-9513-b1064d858b9e\") " Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.473387 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/710c0046-3758-4f26-9513-b1064d858b9e-ceph" (OuterVolumeSpecName: "ceph") pod "710c0046-3758-4f26-9513-b1064d858b9e" (UID: "710c0046-3758-4f26-9513-b1064d858b9e"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.485394 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/710c0046-3758-4f26-9513-b1064d858b9e-kube-api-access-m74f5" (OuterVolumeSpecName: "kube-api-access-m74f5") pod "710c0046-3758-4f26-9513-b1064d858b9e" (UID: "710c0046-3758-4f26-9513-b1064d858b9e"). InnerVolumeSpecName "kube-api-access-m74f5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.490842 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/710c0046-3758-4f26-9513-b1064d858b9e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "710c0046-3758-4f26-9513-b1064d858b9e" (UID: "710c0046-3758-4f26-9513-b1064d858b9e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.494187 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/710c0046-3758-4f26-9513-b1064d858b9e-inventory" (OuterVolumeSpecName: "inventory") pod "710c0046-3758-4f26-9513-b1064d858b9e" (UID: "710c0046-3758-4f26-9513-b1064d858b9e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.565800 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/710c0046-3758-4f26-9513-b1064d858b9e-inventory\") on node \"crc\" DevicePath \"\"" Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.565858 5024 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/710c0046-3758-4f26-9513-b1064d858b9e-ceph\") on node \"crc\" DevicePath \"\"" Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.565875 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/710c0046-3758-4f26-9513-b1064d858b9e-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.565888 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m74f5\" (UniqueName: \"kubernetes.io/projected/710c0046-3758-4f26-9513-b1064d858b9e-kube-api-access-m74f5\") on node \"crc\" DevicePath \"\"" Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.824026 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj" event={"ID":"710c0046-3758-4f26-9513-b1064d858b9e","Type":"ContainerDied","Data":"3873d4a17282b5e01464681105cb12538a151b1eb238fa346ec2780e5c669a3e"} Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.824081 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj" Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.824088 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3873d4a17282b5e01464681105cb12538a151b1eb238fa346ec2780e5c669a3e" Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.897309 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-g49rf"] Oct 07 13:06:24 crc kubenswrapper[5024]: E1007 13:06:24.897978 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="710c0046-3758-4f26-9513-b1064d858b9e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.898006 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="710c0046-3758-4f26-9513-b1064d858b9e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.898281 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="710c0046-3758-4f26-9513-b1064d858b9e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.899283 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g49rf" Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.902039 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.902071 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.902394 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.902464 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-424lb" Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.902733 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.914540 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-g49rf"] Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.974482 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/82eb373d-d00e-4509-bbaf-728b2cb90b80-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g49rf\" (UID: \"82eb373d-d00e-4509-bbaf-728b2cb90b80\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g49rf" Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.974544 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7hvs\" (UniqueName: \"kubernetes.io/projected/82eb373d-d00e-4509-bbaf-728b2cb90b80-kube-api-access-z7hvs\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g49rf\" (UID: \"82eb373d-d00e-4509-bbaf-728b2cb90b80\") 
" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g49rf" Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.974627 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82eb373d-d00e-4509-bbaf-728b2cb90b80-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g49rf\" (UID: \"82eb373d-d00e-4509-bbaf-728b2cb90b80\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g49rf" Oct 07 13:06:24 crc kubenswrapper[5024]: I1007 13:06:24.974659 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/82eb373d-d00e-4509-bbaf-728b2cb90b80-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g49rf\" (UID: \"82eb373d-d00e-4509-bbaf-728b2cb90b80\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g49rf" Oct 07 13:06:25 crc kubenswrapper[5024]: I1007 13:06:25.077102 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/82eb373d-d00e-4509-bbaf-728b2cb90b80-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g49rf\" (UID: \"82eb373d-d00e-4509-bbaf-728b2cb90b80\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g49rf" Oct 07 13:06:25 crc kubenswrapper[5024]: I1007 13:06:25.077181 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7hvs\" (UniqueName: \"kubernetes.io/projected/82eb373d-d00e-4509-bbaf-728b2cb90b80-kube-api-access-z7hvs\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g49rf\" (UID: \"82eb373d-d00e-4509-bbaf-728b2cb90b80\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g49rf" Oct 07 13:06:25 crc kubenswrapper[5024]: I1007 13:06:25.077262 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/82eb373d-d00e-4509-bbaf-728b2cb90b80-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g49rf\" (UID: \"82eb373d-d00e-4509-bbaf-728b2cb90b80\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g49rf" Oct 07 13:06:25 crc kubenswrapper[5024]: I1007 13:06:25.077293 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/82eb373d-d00e-4509-bbaf-728b2cb90b80-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g49rf\" (UID: \"82eb373d-d00e-4509-bbaf-728b2cb90b80\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g49rf" Oct 07 13:06:25 crc kubenswrapper[5024]: I1007 13:06:25.085012 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82eb373d-d00e-4509-bbaf-728b2cb90b80-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g49rf\" (UID: \"82eb373d-d00e-4509-bbaf-728b2cb90b80\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g49rf" Oct 07 13:06:25 crc kubenswrapper[5024]: I1007 13:06:25.086658 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/82eb373d-d00e-4509-bbaf-728b2cb90b80-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g49rf\" (UID: \"82eb373d-d00e-4509-bbaf-728b2cb90b80\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g49rf" Oct 07 13:06:25 crc kubenswrapper[5024]: I1007 13:06:25.086990 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/82eb373d-d00e-4509-bbaf-728b2cb90b80-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g49rf\" (UID: \"82eb373d-d00e-4509-bbaf-728b2cb90b80\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g49rf" Oct 07 13:06:25 crc kubenswrapper[5024]: I1007 13:06:25.098029 5024 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-z7hvs\" (UniqueName: \"kubernetes.io/projected/82eb373d-d00e-4509-bbaf-728b2cb90b80-kube-api-access-z7hvs\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g49rf\" (UID: \"82eb373d-d00e-4509-bbaf-728b2cb90b80\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g49rf" Oct 07 13:06:25 crc kubenswrapper[5024]: I1007 13:06:25.226497 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g49rf" Oct 07 13:06:25 crc kubenswrapper[5024]: I1007 13:06:25.920929 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-g49rf"] Oct 07 13:06:26 crc kubenswrapper[5024]: I1007 13:06:26.845023 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g49rf" event={"ID":"82eb373d-d00e-4509-bbaf-728b2cb90b80","Type":"ContainerStarted","Data":"0023ca50536b47c4b4583580ee4c65fe624fe2b3f96c4f299dbbf18dcea319b9"} Oct 07 13:06:27 crc kubenswrapper[5024]: I1007 13:06:27.857059 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g49rf" event={"ID":"82eb373d-d00e-4509-bbaf-728b2cb90b80","Type":"ContainerStarted","Data":"b754e280bdfd2e6dc185d44df1c41081fa1b06c519cbfcd548e986117e029711"} Oct 07 13:06:27 crc kubenswrapper[5024]: I1007 13:06:27.897034 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g49rf" podStartSLOduration=3.175035415 podStartE2EDuration="3.897008459s" podCreationTimestamp="2025-10-07 13:06:24 +0000 UTC" firstStartedPulling="2025-10-07 13:06:25.931084505 +0000 UTC m=+2324.006871343" lastFinishedPulling="2025-10-07 13:06:26.653057559 +0000 UTC m=+2324.728844387" observedRunningTime="2025-10-07 13:06:27.881790631 +0000 UTC m=+2325.957577489" 
watchObservedRunningTime="2025-10-07 13:06:27.897008459 +0000 UTC m=+2325.972795297" Oct 07 13:06:43 crc kubenswrapper[5024]: I1007 13:06:43.720705 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 13:06:43 crc kubenswrapper[5024]: I1007 13:06:43.721548 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 13:06:43 crc kubenswrapper[5024]: I1007 13:06:43.721622 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 13:06:43 crc kubenswrapper[5024]: I1007 13:06:43.723286 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b"} pod="openshift-machine-config-operator/machine-config-daemon-t95cr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 07 13:06:43 crc kubenswrapper[5024]: I1007 13:06:43.723489 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" containerID="cri-o://a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b" gracePeriod=600 Oct 07 13:06:43 crc kubenswrapper[5024]: E1007 13:06:43.860252 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:06:44 crc kubenswrapper[5024]: I1007 13:06:44.035554 5024 generic.go:334] "Generic (PLEG): container finished" podID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerID="a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b" exitCode=0 Oct 07 13:06:44 crc kubenswrapper[5024]: I1007 13:06:44.035645 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerDied","Data":"a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b"} Oct 07 13:06:44 crc kubenswrapper[5024]: I1007 13:06:44.036083 5024 scope.go:117] "RemoveContainer" containerID="fa5bef8997e225ec3cb53bc4a8212ef4374b844b1594898d7b97b9838a5326e2" Oct 07 13:06:44 crc kubenswrapper[5024]: I1007 13:06:44.039032 5024 scope.go:117] "RemoveContainer" containerID="a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b" Oct 07 13:06:44 crc kubenswrapper[5024]: E1007 13:06:44.040361 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:06:52 crc kubenswrapper[5024]: I1007 13:06:52.951786 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fgj9p"] Oct 07 13:06:52 crc 
kubenswrapper[5024]: I1007 13:06:52.954820 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fgj9p" Oct 07 13:06:52 crc kubenswrapper[5024]: I1007 13:06:52.975596 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fgj9p"] Oct 07 13:06:53 crc kubenswrapper[5024]: I1007 13:06:53.086573 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf75667-60bd-40a6-a3d3-f7e1e11ae995-utilities\") pod \"community-operators-fgj9p\" (UID: \"0cf75667-60bd-40a6-a3d3-f7e1e11ae995\") " pod="openshift-marketplace/community-operators-fgj9p" Oct 07 13:06:53 crc kubenswrapper[5024]: I1007 13:06:53.086660 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf75667-60bd-40a6-a3d3-f7e1e11ae995-catalog-content\") pod \"community-operators-fgj9p\" (UID: \"0cf75667-60bd-40a6-a3d3-f7e1e11ae995\") " pod="openshift-marketplace/community-operators-fgj9p" Oct 07 13:06:53 crc kubenswrapper[5024]: I1007 13:06:53.086735 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plbc6\" (UniqueName: \"kubernetes.io/projected/0cf75667-60bd-40a6-a3d3-f7e1e11ae995-kube-api-access-plbc6\") pod \"community-operators-fgj9p\" (UID: \"0cf75667-60bd-40a6-a3d3-f7e1e11ae995\") " pod="openshift-marketplace/community-operators-fgj9p" Oct 07 13:06:53 crc kubenswrapper[5024]: I1007 13:06:53.188763 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf75667-60bd-40a6-a3d3-f7e1e11ae995-utilities\") pod \"community-operators-fgj9p\" (UID: \"0cf75667-60bd-40a6-a3d3-f7e1e11ae995\") " pod="openshift-marketplace/community-operators-fgj9p" Oct 07 13:06:53 crc 
kubenswrapper[5024]: I1007 13:06:53.188825 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf75667-60bd-40a6-a3d3-f7e1e11ae995-catalog-content\") pod \"community-operators-fgj9p\" (UID: \"0cf75667-60bd-40a6-a3d3-f7e1e11ae995\") " pod="openshift-marketplace/community-operators-fgj9p" Oct 07 13:06:53 crc kubenswrapper[5024]: I1007 13:06:53.188873 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plbc6\" (UniqueName: \"kubernetes.io/projected/0cf75667-60bd-40a6-a3d3-f7e1e11ae995-kube-api-access-plbc6\") pod \"community-operators-fgj9p\" (UID: \"0cf75667-60bd-40a6-a3d3-f7e1e11ae995\") " pod="openshift-marketplace/community-operators-fgj9p" Oct 07 13:06:53 crc kubenswrapper[5024]: I1007 13:06:53.189715 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf75667-60bd-40a6-a3d3-f7e1e11ae995-utilities\") pod \"community-operators-fgj9p\" (UID: \"0cf75667-60bd-40a6-a3d3-f7e1e11ae995\") " pod="openshift-marketplace/community-operators-fgj9p" Oct 07 13:06:53 crc kubenswrapper[5024]: I1007 13:06:53.189865 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf75667-60bd-40a6-a3d3-f7e1e11ae995-catalog-content\") pod \"community-operators-fgj9p\" (UID: \"0cf75667-60bd-40a6-a3d3-f7e1e11ae995\") " pod="openshift-marketplace/community-operators-fgj9p" Oct 07 13:06:53 crc kubenswrapper[5024]: I1007 13:06:53.227260 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plbc6\" (UniqueName: \"kubernetes.io/projected/0cf75667-60bd-40a6-a3d3-f7e1e11ae995-kube-api-access-plbc6\") pod \"community-operators-fgj9p\" (UID: \"0cf75667-60bd-40a6-a3d3-f7e1e11ae995\") " pod="openshift-marketplace/community-operators-fgj9p" Oct 07 13:06:53 crc kubenswrapper[5024]: I1007 
13:06:53.293246 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fgj9p" Oct 07 13:06:53 crc kubenswrapper[5024]: I1007 13:06:53.624584 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fgj9p"] Oct 07 13:06:54 crc kubenswrapper[5024]: I1007 13:06:54.149560 5024 generic.go:334] "Generic (PLEG): container finished" podID="0cf75667-60bd-40a6-a3d3-f7e1e11ae995" containerID="d932c262791c1aec889bff159f5c7d42a96908f2b26b3b1ec01073ce13cc070e" exitCode=0 Oct 07 13:06:54 crc kubenswrapper[5024]: I1007 13:06:54.149615 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fgj9p" event={"ID":"0cf75667-60bd-40a6-a3d3-f7e1e11ae995","Type":"ContainerDied","Data":"d932c262791c1aec889bff159f5c7d42a96908f2b26b3b1ec01073ce13cc070e"} Oct 07 13:06:54 crc kubenswrapper[5024]: I1007 13:06:54.150033 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fgj9p" event={"ID":"0cf75667-60bd-40a6-a3d3-f7e1e11ae995","Type":"ContainerStarted","Data":"660e1f68adac9181b8539abfedaffcef421a3098f5eade1d4b3c40948f052b0d"} Oct 07 13:06:54 crc kubenswrapper[5024]: I1007 13:06:54.152528 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 07 13:06:55 crc kubenswrapper[5024]: I1007 13:06:55.161202 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fgj9p" event={"ID":"0cf75667-60bd-40a6-a3d3-f7e1e11ae995","Type":"ContainerStarted","Data":"4771b24d9afcd49bcaa6f7b8c144a431c49590f11e9a48d8c373d2592090808d"} Oct 07 13:06:56 crc kubenswrapper[5024]: I1007 13:06:56.193386 5024 generic.go:334] "Generic (PLEG): container finished" podID="0cf75667-60bd-40a6-a3d3-f7e1e11ae995" containerID="4771b24d9afcd49bcaa6f7b8c144a431c49590f11e9a48d8c373d2592090808d" exitCode=0 Oct 07 13:06:56 crc 
kubenswrapper[5024]: I1007 13:06:56.193480 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fgj9p" event={"ID":"0cf75667-60bd-40a6-a3d3-f7e1e11ae995","Type":"ContainerDied","Data":"4771b24d9afcd49bcaa6f7b8c144a431c49590f11e9a48d8c373d2592090808d"} Oct 07 13:06:57 crc kubenswrapper[5024]: I1007 13:06:57.205024 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fgj9p" event={"ID":"0cf75667-60bd-40a6-a3d3-f7e1e11ae995","Type":"ContainerStarted","Data":"4a0c11557691fd86d06caec30e5f766952be8557d525c223b1a7388379743c67"} Oct 07 13:06:57 crc kubenswrapper[5024]: I1007 13:06:57.239063 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fgj9p" podStartSLOduration=2.68865729 podStartE2EDuration="5.23903689s" podCreationTimestamp="2025-10-07 13:06:52 +0000 UTC" firstStartedPulling="2025-10-07 13:06:54.152230307 +0000 UTC m=+2352.228017145" lastFinishedPulling="2025-10-07 13:06:56.702609887 +0000 UTC m=+2354.778396745" observedRunningTime="2025-10-07 13:06:57.231621276 +0000 UTC m=+2355.307408104" watchObservedRunningTime="2025-10-07 13:06:57.23903689 +0000 UTC m=+2355.314823728" Oct 07 13:06:58 crc kubenswrapper[5024]: I1007 13:06:58.751709 5024 scope.go:117] "RemoveContainer" containerID="a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b" Oct 07 13:06:58 crc kubenswrapper[5024]: E1007 13:06:58.752490 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:07:03 crc kubenswrapper[5024]: I1007 13:07:03.294233 5024 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fgj9p" Oct 07 13:07:03 crc kubenswrapper[5024]: I1007 13:07:03.294852 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fgj9p" Oct 07 13:07:03 crc kubenswrapper[5024]: I1007 13:07:03.369163 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fgj9p" Oct 07 13:07:04 crc kubenswrapper[5024]: I1007 13:07:04.384128 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fgj9p" Oct 07 13:07:04 crc kubenswrapper[5024]: I1007 13:07:04.445695 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fgj9p"] Oct 07 13:07:06 crc kubenswrapper[5024]: I1007 13:07:06.311312 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fgj9p" podUID="0cf75667-60bd-40a6-a3d3-f7e1e11ae995" containerName="registry-server" containerID="cri-o://4a0c11557691fd86d06caec30e5f766952be8557d525c223b1a7388379743c67" gracePeriod=2 Oct 07 13:07:06 crc kubenswrapper[5024]: I1007 13:07:06.804894 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fgj9p" Oct 07 13:07:06 crc kubenswrapper[5024]: I1007 13:07:06.922953 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf75667-60bd-40a6-a3d3-f7e1e11ae995-catalog-content\") pod \"0cf75667-60bd-40a6-a3d3-f7e1e11ae995\" (UID: \"0cf75667-60bd-40a6-a3d3-f7e1e11ae995\") " Oct 07 13:07:06 crc kubenswrapper[5024]: I1007 13:07:06.923682 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plbc6\" (UniqueName: \"kubernetes.io/projected/0cf75667-60bd-40a6-a3d3-f7e1e11ae995-kube-api-access-plbc6\") pod \"0cf75667-60bd-40a6-a3d3-f7e1e11ae995\" (UID: \"0cf75667-60bd-40a6-a3d3-f7e1e11ae995\") " Oct 07 13:07:06 crc kubenswrapper[5024]: I1007 13:07:06.923875 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf75667-60bd-40a6-a3d3-f7e1e11ae995-utilities\") pod \"0cf75667-60bd-40a6-a3d3-f7e1e11ae995\" (UID: \"0cf75667-60bd-40a6-a3d3-f7e1e11ae995\") " Oct 07 13:07:06 crc kubenswrapper[5024]: I1007 13:07:06.924928 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cf75667-60bd-40a6-a3d3-f7e1e11ae995-utilities" (OuterVolumeSpecName: "utilities") pod "0cf75667-60bd-40a6-a3d3-f7e1e11ae995" (UID: "0cf75667-60bd-40a6-a3d3-f7e1e11ae995"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:07:06 crc kubenswrapper[5024]: I1007 13:07:06.930864 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cf75667-60bd-40a6-a3d3-f7e1e11ae995-kube-api-access-plbc6" (OuterVolumeSpecName: "kube-api-access-plbc6") pod "0cf75667-60bd-40a6-a3d3-f7e1e11ae995" (UID: "0cf75667-60bd-40a6-a3d3-f7e1e11ae995"). InnerVolumeSpecName "kube-api-access-plbc6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:07:06 crc kubenswrapper[5024]: I1007 13:07:06.978686 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cf75667-60bd-40a6-a3d3-f7e1e11ae995-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0cf75667-60bd-40a6-a3d3-f7e1e11ae995" (UID: "0cf75667-60bd-40a6-a3d3-f7e1e11ae995"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:07:07 crc kubenswrapper[5024]: I1007 13:07:07.026786 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plbc6\" (UniqueName: \"kubernetes.io/projected/0cf75667-60bd-40a6-a3d3-f7e1e11ae995-kube-api-access-plbc6\") on node \"crc\" DevicePath \"\"" Oct 07 13:07:07 crc kubenswrapper[5024]: I1007 13:07:07.026834 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf75667-60bd-40a6-a3d3-f7e1e11ae995-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 13:07:07 crc kubenswrapper[5024]: I1007 13:07:07.026849 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf75667-60bd-40a6-a3d3-f7e1e11ae995-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 13:07:07 crc kubenswrapper[5024]: I1007 13:07:07.328387 5024 generic.go:334] "Generic (PLEG): container finished" podID="82eb373d-d00e-4509-bbaf-728b2cb90b80" containerID="b754e280bdfd2e6dc185d44df1c41081fa1b06c519cbfcd548e986117e029711" exitCode=0 Oct 07 13:07:07 crc kubenswrapper[5024]: I1007 13:07:07.328491 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g49rf" event={"ID":"82eb373d-d00e-4509-bbaf-728b2cb90b80","Type":"ContainerDied","Data":"b754e280bdfd2e6dc185d44df1c41081fa1b06c519cbfcd548e986117e029711"} Oct 07 13:07:07 crc kubenswrapper[5024]: I1007 13:07:07.334915 5024 generic.go:334] "Generic 
(PLEG): container finished" podID="0cf75667-60bd-40a6-a3d3-f7e1e11ae995" containerID="4a0c11557691fd86d06caec30e5f766952be8557d525c223b1a7388379743c67" exitCode=0 Oct 07 13:07:07 crc kubenswrapper[5024]: I1007 13:07:07.334983 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fgj9p" event={"ID":"0cf75667-60bd-40a6-a3d3-f7e1e11ae995","Type":"ContainerDied","Data":"4a0c11557691fd86d06caec30e5f766952be8557d525c223b1a7388379743c67"} Oct 07 13:07:07 crc kubenswrapper[5024]: I1007 13:07:07.335021 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fgj9p" event={"ID":"0cf75667-60bd-40a6-a3d3-f7e1e11ae995","Type":"ContainerDied","Data":"660e1f68adac9181b8539abfedaffcef421a3098f5eade1d4b3c40948f052b0d"} Oct 07 13:07:07 crc kubenswrapper[5024]: I1007 13:07:07.335042 5024 scope.go:117] "RemoveContainer" containerID="4a0c11557691fd86d06caec30e5f766952be8557d525c223b1a7388379743c67" Oct 07 13:07:07 crc kubenswrapper[5024]: I1007 13:07:07.335091 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fgj9p" Oct 07 13:07:07 crc kubenswrapper[5024]: I1007 13:07:07.369194 5024 scope.go:117] "RemoveContainer" containerID="4771b24d9afcd49bcaa6f7b8c144a431c49590f11e9a48d8c373d2592090808d" Oct 07 13:07:07 crc kubenswrapper[5024]: I1007 13:07:07.412719 5024 scope.go:117] "RemoveContainer" containerID="d932c262791c1aec889bff159f5c7d42a96908f2b26b3b1ec01073ce13cc070e" Oct 07 13:07:07 crc kubenswrapper[5024]: I1007 13:07:07.417351 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fgj9p"] Oct 07 13:07:07 crc kubenswrapper[5024]: I1007 13:07:07.428565 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fgj9p"] Oct 07 13:07:07 crc kubenswrapper[5024]: I1007 13:07:07.459190 5024 scope.go:117] "RemoveContainer" containerID="4a0c11557691fd86d06caec30e5f766952be8557d525c223b1a7388379743c67" Oct 07 13:07:07 crc kubenswrapper[5024]: E1007 13:07:07.459937 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a0c11557691fd86d06caec30e5f766952be8557d525c223b1a7388379743c67\": container with ID starting with 4a0c11557691fd86d06caec30e5f766952be8557d525c223b1a7388379743c67 not found: ID does not exist" containerID="4a0c11557691fd86d06caec30e5f766952be8557d525c223b1a7388379743c67" Oct 07 13:07:07 crc kubenswrapper[5024]: I1007 13:07:07.459989 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a0c11557691fd86d06caec30e5f766952be8557d525c223b1a7388379743c67"} err="failed to get container status \"4a0c11557691fd86d06caec30e5f766952be8557d525c223b1a7388379743c67\": rpc error: code = NotFound desc = could not find container \"4a0c11557691fd86d06caec30e5f766952be8557d525c223b1a7388379743c67\": container with ID starting with 4a0c11557691fd86d06caec30e5f766952be8557d525c223b1a7388379743c67 not 
found: ID does not exist" Oct 07 13:07:07 crc kubenswrapper[5024]: I1007 13:07:07.460028 5024 scope.go:117] "RemoveContainer" containerID="4771b24d9afcd49bcaa6f7b8c144a431c49590f11e9a48d8c373d2592090808d" Oct 07 13:07:07 crc kubenswrapper[5024]: E1007 13:07:07.460544 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4771b24d9afcd49bcaa6f7b8c144a431c49590f11e9a48d8c373d2592090808d\": container with ID starting with 4771b24d9afcd49bcaa6f7b8c144a431c49590f11e9a48d8c373d2592090808d not found: ID does not exist" containerID="4771b24d9afcd49bcaa6f7b8c144a431c49590f11e9a48d8c373d2592090808d" Oct 07 13:07:07 crc kubenswrapper[5024]: I1007 13:07:07.460572 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4771b24d9afcd49bcaa6f7b8c144a431c49590f11e9a48d8c373d2592090808d"} err="failed to get container status \"4771b24d9afcd49bcaa6f7b8c144a431c49590f11e9a48d8c373d2592090808d\": rpc error: code = NotFound desc = could not find container \"4771b24d9afcd49bcaa6f7b8c144a431c49590f11e9a48d8c373d2592090808d\": container with ID starting with 4771b24d9afcd49bcaa6f7b8c144a431c49590f11e9a48d8c373d2592090808d not found: ID does not exist" Oct 07 13:07:07 crc kubenswrapper[5024]: I1007 13:07:07.460590 5024 scope.go:117] "RemoveContainer" containerID="d932c262791c1aec889bff159f5c7d42a96908f2b26b3b1ec01073ce13cc070e" Oct 07 13:07:07 crc kubenswrapper[5024]: E1007 13:07:07.461017 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d932c262791c1aec889bff159f5c7d42a96908f2b26b3b1ec01073ce13cc070e\": container with ID starting with d932c262791c1aec889bff159f5c7d42a96908f2b26b3b1ec01073ce13cc070e not found: ID does not exist" containerID="d932c262791c1aec889bff159f5c7d42a96908f2b26b3b1ec01073ce13cc070e" Oct 07 13:07:07 crc kubenswrapper[5024]: I1007 13:07:07.461191 5024 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d932c262791c1aec889bff159f5c7d42a96908f2b26b3b1ec01073ce13cc070e"} err="failed to get container status \"d932c262791c1aec889bff159f5c7d42a96908f2b26b3b1ec01073ce13cc070e\": rpc error: code = NotFound desc = could not find container \"d932c262791c1aec889bff159f5c7d42a96908f2b26b3b1ec01073ce13cc070e\": container with ID starting with d932c262791c1aec889bff159f5c7d42a96908f2b26b3b1ec01073ce13cc070e not found: ID does not exist" Oct 07 13:07:08 crc kubenswrapper[5024]: I1007 13:07:08.767799 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cf75667-60bd-40a6-a3d3-f7e1e11ae995" path="/var/lib/kubelet/pods/0cf75667-60bd-40a6-a3d3-f7e1e11ae995/volumes" Oct 07 13:07:08 crc kubenswrapper[5024]: I1007 13:07:08.828330 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g49rf" Oct 07 13:07:08 crc kubenswrapper[5024]: I1007 13:07:08.870628 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/82eb373d-d00e-4509-bbaf-728b2cb90b80-ceph\") pod \"82eb373d-d00e-4509-bbaf-728b2cb90b80\" (UID: \"82eb373d-d00e-4509-bbaf-728b2cb90b80\") " Oct 07 13:07:08 crc kubenswrapper[5024]: I1007 13:07:08.871166 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82eb373d-d00e-4509-bbaf-728b2cb90b80-inventory\") pod \"82eb373d-d00e-4509-bbaf-728b2cb90b80\" (UID: \"82eb373d-d00e-4509-bbaf-728b2cb90b80\") " Oct 07 13:07:08 crc kubenswrapper[5024]: I1007 13:07:08.871320 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7hvs\" (UniqueName: \"kubernetes.io/projected/82eb373d-d00e-4509-bbaf-728b2cb90b80-kube-api-access-z7hvs\") pod \"82eb373d-d00e-4509-bbaf-728b2cb90b80\" (UID: 
\"82eb373d-d00e-4509-bbaf-728b2cb90b80\") " Oct 07 13:07:08 crc kubenswrapper[5024]: I1007 13:07:08.871494 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/82eb373d-d00e-4509-bbaf-728b2cb90b80-ssh-key\") pod \"82eb373d-d00e-4509-bbaf-728b2cb90b80\" (UID: \"82eb373d-d00e-4509-bbaf-728b2cb90b80\") " Oct 07 13:07:08 crc kubenswrapper[5024]: I1007 13:07:08.879670 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82eb373d-d00e-4509-bbaf-728b2cb90b80-kube-api-access-z7hvs" (OuterVolumeSpecName: "kube-api-access-z7hvs") pod "82eb373d-d00e-4509-bbaf-728b2cb90b80" (UID: "82eb373d-d00e-4509-bbaf-728b2cb90b80"). InnerVolumeSpecName "kube-api-access-z7hvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:07:08 crc kubenswrapper[5024]: I1007 13:07:08.879693 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82eb373d-d00e-4509-bbaf-728b2cb90b80-ceph" (OuterVolumeSpecName: "ceph") pod "82eb373d-d00e-4509-bbaf-728b2cb90b80" (UID: "82eb373d-d00e-4509-bbaf-728b2cb90b80"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:07:08 crc kubenswrapper[5024]: I1007 13:07:08.901792 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82eb373d-d00e-4509-bbaf-728b2cb90b80-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "82eb373d-d00e-4509-bbaf-728b2cb90b80" (UID: "82eb373d-d00e-4509-bbaf-728b2cb90b80"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:07:08 crc kubenswrapper[5024]: I1007 13:07:08.902384 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82eb373d-d00e-4509-bbaf-728b2cb90b80-inventory" (OuterVolumeSpecName: "inventory") pod "82eb373d-d00e-4509-bbaf-728b2cb90b80" (UID: "82eb373d-d00e-4509-bbaf-728b2cb90b80"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:07:08 crc kubenswrapper[5024]: I1007 13:07:08.973754 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82eb373d-d00e-4509-bbaf-728b2cb90b80-inventory\") on node \"crc\" DevicePath \"\"" Oct 07 13:07:08 crc kubenswrapper[5024]: I1007 13:07:08.974095 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7hvs\" (UniqueName: \"kubernetes.io/projected/82eb373d-d00e-4509-bbaf-728b2cb90b80-kube-api-access-z7hvs\") on node \"crc\" DevicePath \"\"" Oct 07 13:07:08 crc kubenswrapper[5024]: I1007 13:07:08.974122 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/82eb373d-d00e-4509-bbaf-728b2cb90b80-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 07 13:07:08 crc kubenswrapper[5024]: I1007 13:07:08.974150 5024 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/82eb373d-d00e-4509-bbaf-728b2cb90b80-ceph\") on node \"crc\" DevicePath \"\"" Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.363559 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g49rf" Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.363482 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g49rf" event={"ID":"82eb373d-d00e-4509-bbaf-728b2cb90b80","Type":"ContainerDied","Data":"0023ca50536b47c4b4583580ee4c65fe624fe2b3f96c4f299dbbf18dcea319b9"} Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.364297 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0023ca50536b47c4b4583580ee4c65fe624fe2b3f96c4f299dbbf18dcea319b9" Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.460785 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk"] Oct 07 13:07:09 crc kubenswrapper[5024]: E1007 13:07:09.462383 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf75667-60bd-40a6-a3d3-f7e1e11ae995" containerName="extract-utilities" Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.462418 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf75667-60bd-40a6-a3d3-f7e1e11ae995" containerName="extract-utilities" Oct 07 13:07:09 crc kubenswrapper[5024]: E1007 13:07:09.462440 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82eb373d-d00e-4509-bbaf-728b2cb90b80" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.462458 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="82eb373d-d00e-4509-bbaf-728b2cb90b80" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 07 13:07:09 crc kubenswrapper[5024]: E1007 13:07:09.462563 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf75667-60bd-40a6-a3d3-f7e1e11ae995" containerName="extract-content" Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.462577 5024 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="0cf75667-60bd-40a6-a3d3-f7e1e11ae995" containerName="extract-content" Oct 07 13:07:09 crc kubenswrapper[5024]: E1007 13:07:09.462605 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf75667-60bd-40a6-a3d3-f7e1e11ae995" containerName="registry-server" Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.462617 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf75667-60bd-40a6-a3d3-f7e1e11ae995" containerName="registry-server" Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.463822 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="82eb373d-d00e-4509-bbaf-728b2cb90b80" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.463952 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf75667-60bd-40a6-a3d3-f7e1e11ae995" containerName="registry-server" Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.465666 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk" Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.469028 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.477287 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.477740 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-424lb" Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.477938 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.478127 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.492993 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4270239b-72c9-4e60-938e-77db772605ed-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk\" (UID: \"4270239b-72c9-4e60-938e-77db772605ed\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk" Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.493421 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4270239b-72c9-4e60-938e-77db772605ed-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk\" (UID: \"4270239b-72c9-4e60-938e-77db772605ed\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk" Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.493975 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4270239b-72c9-4e60-938e-77db772605ed-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk\" (UID: \"4270239b-72c9-4e60-938e-77db772605ed\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk" Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.494972 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hct9d\" (UniqueName: \"kubernetes.io/projected/4270239b-72c9-4e60-938e-77db772605ed-kube-api-access-hct9d\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk\" (UID: \"4270239b-72c9-4e60-938e-77db772605ed\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk" Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.500401 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk"] Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.597872 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4270239b-72c9-4e60-938e-77db772605ed-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk\" (UID: \"4270239b-72c9-4e60-938e-77db772605ed\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk" Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.598404 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hct9d\" (UniqueName: \"kubernetes.io/projected/4270239b-72c9-4e60-938e-77db772605ed-kube-api-access-hct9d\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk\" (UID: \"4270239b-72c9-4e60-938e-77db772605ed\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk" Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.598594 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4270239b-72c9-4e60-938e-77db772605ed-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk\" (UID: \"4270239b-72c9-4e60-938e-77db772605ed\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk" Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.598763 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4270239b-72c9-4e60-938e-77db772605ed-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk\" (UID: \"4270239b-72c9-4e60-938e-77db772605ed\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk" Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.603666 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4270239b-72c9-4e60-938e-77db772605ed-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk\" (UID: \"4270239b-72c9-4e60-938e-77db772605ed\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk" Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.605443 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4270239b-72c9-4e60-938e-77db772605ed-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk\" (UID: \"4270239b-72c9-4e60-938e-77db772605ed\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk" Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.614018 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4270239b-72c9-4e60-938e-77db772605ed-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk\" (UID: \"4270239b-72c9-4e60-938e-77db772605ed\") " 
pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk" Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.622023 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hct9d\" (UniqueName: \"kubernetes.io/projected/4270239b-72c9-4e60-938e-77db772605ed-kube-api-access-hct9d\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk\" (UID: \"4270239b-72c9-4e60-938e-77db772605ed\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk" Oct 07 13:07:09 crc kubenswrapper[5024]: I1007 13:07:09.831008 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk" Oct 07 13:07:10 crc kubenswrapper[5024]: I1007 13:07:10.419833 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk"] Oct 07 13:07:11 crc kubenswrapper[5024]: I1007 13:07:11.393770 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk" event={"ID":"4270239b-72c9-4e60-938e-77db772605ed","Type":"ContainerStarted","Data":"90e306ba36abb3d6315024bab38e74bb7e7af3d71ab9840c22e84780b9806091"} Oct 07 13:07:11 crc kubenswrapper[5024]: I1007 13:07:11.395553 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk" event={"ID":"4270239b-72c9-4e60-938e-77db772605ed","Type":"ContainerStarted","Data":"65772dd037c5f88cb440b0d215e472a1e31e783097d572ad0b385e9d31b2ec39"} Oct 07 13:07:11 crc kubenswrapper[5024]: I1007 13:07:11.423923 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk" podStartSLOduration=1.999155074 podStartE2EDuration="2.423893862s" podCreationTimestamp="2025-10-07 13:07:09 +0000 UTC" firstStartedPulling="2025-10-07 13:07:10.426087317 +0000 UTC 
m=+2368.501874165" lastFinishedPulling="2025-10-07 13:07:10.850826115 +0000 UTC m=+2368.926612953" observedRunningTime="2025-10-07 13:07:11.418811486 +0000 UTC m=+2369.494598344" watchObservedRunningTime="2025-10-07 13:07:11.423893862 +0000 UTC m=+2369.499680720" Oct 07 13:07:12 crc kubenswrapper[5024]: I1007 13:07:12.762512 5024 scope.go:117] "RemoveContainer" containerID="a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b" Oct 07 13:07:12 crc kubenswrapper[5024]: E1007 13:07:12.763058 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:07:16 crc kubenswrapper[5024]: I1007 13:07:16.466324 5024 generic.go:334] "Generic (PLEG): container finished" podID="4270239b-72c9-4e60-938e-77db772605ed" containerID="90e306ba36abb3d6315024bab38e74bb7e7af3d71ab9840c22e84780b9806091" exitCode=0 Oct 07 13:07:16 crc kubenswrapper[5024]: I1007 13:07:16.466429 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk" event={"ID":"4270239b-72c9-4e60-938e-77db772605ed","Type":"ContainerDied","Data":"90e306ba36abb3d6315024bab38e74bb7e7af3d71ab9840c22e84780b9806091"} Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.012920 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.102279 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4270239b-72c9-4e60-938e-77db772605ed-inventory\") pod \"4270239b-72c9-4e60-938e-77db772605ed\" (UID: \"4270239b-72c9-4e60-938e-77db772605ed\") " Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.102400 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4270239b-72c9-4e60-938e-77db772605ed-ceph\") pod \"4270239b-72c9-4e60-938e-77db772605ed\" (UID: \"4270239b-72c9-4e60-938e-77db772605ed\") " Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.102664 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hct9d\" (UniqueName: \"kubernetes.io/projected/4270239b-72c9-4e60-938e-77db772605ed-kube-api-access-hct9d\") pod \"4270239b-72c9-4e60-938e-77db772605ed\" (UID: \"4270239b-72c9-4e60-938e-77db772605ed\") " Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.102798 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4270239b-72c9-4e60-938e-77db772605ed-ssh-key\") pod \"4270239b-72c9-4e60-938e-77db772605ed\" (UID: \"4270239b-72c9-4e60-938e-77db772605ed\") " Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.110588 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4270239b-72c9-4e60-938e-77db772605ed-kube-api-access-hct9d" (OuterVolumeSpecName: "kube-api-access-hct9d") pod "4270239b-72c9-4e60-938e-77db772605ed" (UID: "4270239b-72c9-4e60-938e-77db772605ed"). InnerVolumeSpecName "kube-api-access-hct9d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.117216 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4270239b-72c9-4e60-938e-77db772605ed-ceph" (OuterVolumeSpecName: "ceph") pod "4270239b-72c9-4e60-938e-77db772605ed" (UID: "4270239b-72c9-4e60-938e-77db772605ed"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.133311 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4270239b-72c9-4e60-938e-77db772605ed-inventory" (OuterVolumeSpecName: "inventory") pod "4270239b-72c9-4e60-938e-77db772605ed" (UID: "4270239b-72c9-4e60-938e-77db772605ed"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.135031 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4270239b-72c9-4e60-938e-77db772605ed-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4270239b-72c9-4e60-938e-77db772605ed" (UID: "4270239b-72c9-4e60-938e-77db772605ed"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.205163 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hct9d\" (UniqueName: \"kubernetes.io/projected/4270239b-72c9-4e60-938e-77db772605ed-kube-api-access-hct9d\") on node \"crc\" DevicePath \"\"" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.205210 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4270239b-72c9-4e60-938e-77db772605ed-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.205224 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4270239b-72c9-4e60-938e-77db772605ed-inventory\") on node \"crc\" DevicePath \"\"" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.205236 5024 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4270239b-72c9-4e60-938e-77db772605ed-ceph\") on node \"crc\" DevicePath \"\"" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.510072 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk" event={"ID":"4270239b-72c9-4e60-938e-77db772605ed","Type":"ContainerDied","Data":"65772dd037c5f88cb440b0d215e472a1e31e783097d572ad0b385e9d31b2ec39"} Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.510206 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65772dd037c5f88cb440b0d215e472a1e31e783097d572ad0b385e9d31b2ec39" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.510571 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.601815 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5"] Oct 07 13:07:18 crc kubenswrapper[5024]: E1007 13:07:18.604413 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4270239b-72c9-4e60-938e-77db772605ed" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.604450 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="4270239b-72c9-4e60-938e-77db772605ed" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.605382 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="4270239b-72c9-4e60-938e-77db772605ed" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.609410 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.614550 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-424lb" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.615390 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.615521 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.615768 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.619378 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.628522 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b17f8fee-5248-4b95-b6a3-35ea547dbb4c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5\" (UID: \"b17f8fee-5248-4b95-b6a3-35ea547dbb4c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.628764 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dgd4\" (UniqueName: \"kubernetes.io/projected/b17f8fee-5248-4b95-b6a3-35ea547dbb4c-kube-api-access-9dgd4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5\" (UID: \"b17f8fee-5248-4b95-b6a3-35ea547dbb4c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.628889 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b17f8fee-5248-4b95-b6a3-35ea547dbb4c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5\" (UID: \"b17f8fee-5248-4b95-b6a3-35ea547dbb4c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.628947 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b17f8fee-5248-4b95-b6a3-35ea547dbb4c-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5\" (UID: \"b17f8fee-5248-4b95-b6a3-35ea547dbb4c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.633534 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5"] Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.731080 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dgd4\" (UniqueName: \"kubernetes.io/projected/b17f8fee-5248-4b95-b6a3-35ea547dbb4c-kube-api-access-9dgd4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5\" (UID: \"b17f8fee-5248-4b95-b6a3-35ea547dbb4c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.731179 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b17f8fee-5248-4b95-b6a3-35ea547dbb4c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5\" (UID: \"b17f8fee-5248-4b95-b6a3-35ea547dbb4c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.731209 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b17f8fee-5248-4b95-b6a3-35ea547dbb4c-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5\" (UID: \"b17f8fee-5248-4b95-b6a3-35ea547dbb4c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.731249 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b17f8fee-5248-4b95-b6a3-35ea547dbb4c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5\" (UID: \"b17f8fee-5248-4b95-b6a3-35ea547dbb4c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.737726 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b17f8fee-5248-4b95-b6a3-35ea547dbb4c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5\" (UID: \"b17f8fee-5248-4b95-b6a3-35ea547dbb4c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.738836 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b17f8fee-5248-4b95-b6a3-35ea547dbb4c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5\" (UID: \"b17f8fee-5248-4b95-b6a3-35ea547dbb4c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.741049 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b17f8fee-5248-4b95-b6a3-35ea547dbb4c-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5\" (UID: \"b17f8fee-5248-4b95-b6a3-35ea547dbb4c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 
13:07:18.750863 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dgd4\" (UniqueName: \"kubernetes.io/projected/b17f8fee-5248-4b95-b6a3-35ea547dbb4c-kube-api-access-9dgd4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5\" (UID: \"b17f8fee-5248-4b95-b6a3-35ea547dbb4c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5" Oct 07 13:07:18 crc kubenswrapper[5024]: I1007 13:07:18.936882 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5" Oct 07 13:07:19 crc kubenswrapper[5024]: I1007 13:07:19.625125 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5"] Oct 07 13:07:20 crc kubenswrapper[5024]: I1007 13:07:20.534770 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5" event={"ID":"b17f8fee-5248-4b95-b6a3-35ea547dbb4c","Type":"ContainerStarted","Data":"e1e9ee85abf4f928b3fea4a56bf7ef6ff16e8ce3c8b81726879ecdcc7082f424"} Oct 07 13:07:20 crc kubenswrapper[5024]: I1007 13:07:20.535274 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5" event={"ID":"b17f8fee-5248-4b95-b6a3-35ea547dbb4c","Type":"ContainerStarted","Data":"dc76aab60f4a387cc4d667b12bf5f8235813f6465d45ef9e0a22b7c16c1e64f6"} Oct 07 13:07:20 crc kubenswrapper[5024]: I1007 13:07:20.568652 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5" podStartSLOduration=2.133192906 podStartE2EDuration="2.568625482s" podCreationTimestamp="2025-10-07 13:07:18 +0000 UTC" firstStartedPulling="2025-10-07 13:07:19.628730584 +0000 UTC m=+2377.704517422" lastFinishedPulling="2025-10-07 13:07:20.06416316 +0000 UTC m=+2378.139949998" observedRunningTime="2025-10-07 
13:07:20.560392495 +0000 UTC m=+2378.636179343" watchObservedRunningTime="2025-10-07 13:07:20.568625482 +0000 UTC m=+2378.644412320" Oct 07 13:07:23 crc kubenswrapper[5024]: I1007 13:07:23.753894 5024 scope.go:117] "RemoveContainer" containerID="a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b" Oct 07 13:07:23 crc kubenswrapper[5024]: E1007 13:07:23.755160 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:07:28 crc kubenswrapper[5024]: E1007 13:07:28.003046 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Oct 07 13:07:34 crc kubenswrapper[5024]: I1007 13:07:34.753346 5024 scope.go:117] "RemoveContainer" containerID="a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b" Oct 07 13:07:34 crc kubenswrapper[5024]: E1007 13:07:34.754664 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:07:45 crc kubenswrapper[5024]: I1007 13:07:45.752580 5024 scope.go:117] "RemoveContainer" containerID="a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b" Oct 07 13:07:45 crc kubenswrapper[5024]: E1007 13:07:45.754429 5024 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:07:59 crc kubenswrapper[5024]: I1007 13:07:59.751389 5024 scope.go:117] "RemoveContainer" containerID="a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b" Oct 07 13:07:59 crc kubenswrapper[5024]: E1007 13:07:59.752390 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:08:11 crc kubenswrapper[5024]: I1007 13:08:11.121663 5024 generic.go:334] "Generic (PLEG): container finished" podID="b17f8fee-5248-4b95-b6a3-35ea547dbb4c" containerID="e1e9ee85abf4f928b3fea4a56bf7ef6ff16e8ce3c8b81726879ecdcc7082f424" exitCode=0 Oct 07 13:08:11 crc kubenswrapper[5024]: I1007 13:08:11.121815 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5" event={"ID":"b17f8fee-5248-4b95-b6a3-35ea547dbb4c","Type":"ContainerDied","Data":"e1e9ee85abf4f928b3fea4a56bf7ef6ff16e8ce3c8b81726879ecdcc7082f424"} Oct 07 13:08:12 crc kubenswrapper[5024]: I1007 13:08:12.620169 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5" Oct 07 13:08:12 crc kubenswrapper[5024]: I1007 13:08:12.752177 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b17f8fee-5248-4b95-b6a3-35ea547dbb4c-inventory\") pod \"b17f8fee-5248-4b95-b6a3-35ea547dbb4c\" (UID: \"b17f8fee-5248-4b95-b6a3-35ea547dbb4c\") " Oct 07 13:08:12 crc kubenswrapper[5024]: I1007 13:08:12.752592 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b17f8fee-5248-4b95-b6a3-35ea547dbb4c-ssh-key\") pod \"b17f8fee-5248-4b95-b6a3-35ea547dbb4c\" (UID: \"b17f8fee-5248-4b95-b6a3-35ea547dbb4c\") " Oct 07 13:08:12 crc kubenswrapper[5024]: I1007 13:08:12.753010 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b17f8fee-5248-4b95-b6a3-35ea547dbb4c-ceph\") pod \"b17f8fee-5248-4b95-b6a3-35ea547dbb4c\" (UID: \"b17f8fee-5248-4b95-b6a3-35ea547dbb4c\") " Oct 07 13:08:12 crc kubenswrapper[5024]: I1007 13:08:12.754280 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dgd4\" (UniqueName: \"kubernetes.io/projected/b17f8fee-5248-4b95-b6a3-35ea547dbb4c-kube-api-access-9dgd4\") pod \"b17f8fee-5248-4b95-b6a3-35ea547dbb4c\" (UID: \"b17f8fee-5248-4b95-b6a3-35ea547dbb4c\") " Oct 07 13:08:12 crc kubenswrapper[5024]: I1007 13:08:12.760796 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b17f8fee-5248-4b95-b6a3-35ea547dbb4c-ceph" (OuterVolumeSpecName: "ceph") pod "b17f8fee-5248-4b95-b6a3-35ea547dbb4c" (UID: "b17f8fee-5248-4b95-b6a3-35ea547dbb4c"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:08:12 crc kubenswrapper[5024]: I1007 13:08:12.765664 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b17f8fee-5248-4b95-b6a3-35ea547dbb4c-kube-api-access-9dgd4" (OuterVolumeSpecName: "kube-api-access-9dgd4") pod "b17f8fee-5248-4b95-b6a3-35ea547dbb4c" (UID: "b17f8fee-5248-4b95-b6a3-35ea547dbb4c"). InnerVolumeSpecName "kube-api-access-9dgd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:08:12 crc kubenswrapper[5024]: I1007 13:08:12.789976 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b17f8fee-5248-4b95-b6a3-35ea547dbb4c-inventory" (OuterVolumeSpecName: "inventory") pod "b17f8fee-5248-4b95-b6a3-35ea547dbb4c" (UID: "b17f8fee-5248-4b95-b6a3-35ea547dbb4c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:08:12 crc kubenswrapper[5024]: I1007 13:08:12.801894 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b17f8fee-5248-4b95-b6a3-35ea547dbb4c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b17f8fee-5248-4b95-b6a3-35ea547dbb4c" (UID: "b17f8fee-5248-4b95-b6a3-35ea547dbb4c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:08:12 crc kubenswrapper[5024]: I1007 13:08:12.859125 5024 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b17f8fee-5248-4b95-b6a3-35ea547dbb4c-ceph\") on node \"crc\" DevicePath \"\"" Oct 07 13:08:12 crc kubenswrapper[5024]: I1007 13:08:12.859291 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dgd4\" (UniqueName: \"kubernetes.io/projected/b17f8fee-5248-4b95-b6a3-35ea547dbb4c-kube-api-access-9dgd4\") on node \"crc\" DevicePath \"\"" Oct 07 13:08:12 crc kubenswrapper[5024]: I1007 13:08:12.859321 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b17f8fee-5248-4b95-b6a3-35ea547dbb4c-inventory\") on node \"crc\" DevicePath \"\"" Oct 07 13:08:12 crc kubenswrapper[5024]: I1007 13:08:12.859341 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b17f8fee-5248-4b95-b6a3-35ea547dbb4c-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 07 13:08:13 crc kubenswrapper[5024]: I1007 13:08:13.153066 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5" event={"ID":"b17f8fee-5248-4b95-b6a3-35ea547dbb4c","Type":"ContainerDied","Data":"dc76aab60f4a387cc4d667b12bf5f8235813f6465d45ef9e0a22b7c16c1e64f6"} Oct 07 13:08:13 crc kubenswrapper[5024]: I1007 13:08:13.153131 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc76aab60f4a387cc4d667b12bf5f8235813f6465d45ef9e0a22b7c16c1e64f6" Oct 07 13:08:13 crc kubenswrapper[5024]: I1007 13:08:13.153210 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5" Oct 07 13:08:13 crc kubenswrapper[5024]: I1007 13:08:13.259032 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9ljl7"] Oct 07 13:08:13 crc kubenswrapper[5024]: E1007 13:08:13.259732 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b17f8fee-5248-4b95-b6a3-35ea547dbb4c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 07 13:08:13 crc kubenswrapper[5024]: I1007 13:08:13.259765 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b17f8fee-5248-4b95-b6a3-35ea547dbb4c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 07 13:08:13 crc kubenswrapper[5024]: I1007 13:08:13.260066 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="b17f8fee-5248-4b95-b6a3-35ea547dbb4c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 07 13:08:13 crc kubenswrapper[5024]: I1007 13:08:13.261120 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9ljl7" Oct 07 13:08:13 crc kubenswrapper[5024]: I1007 13:08:13.264312 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 07 13:08:13 crc kubenswrapper[5024]: I1007 13:08:13.264554 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 07 13:08:13 crc kubenswrapper[5024]: I1007 13:08:13.264761 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-424lb" Oct 07 13:08:13 crc kubenswrapper[5024]: I1007 13:08:13.264984 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 07 13:08:13 crc kubenswrapper[5024]: I1007 13:08:13.266081 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 07 13:08:13 crc kubenswrapper[5024]: I1007 13:08:13.274452 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9ljl7"] Oct 07 13:08:13 crc kubenswrapper[5024]: I1007 13:08:13.369959 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e49b99f2-b6f5-4046-8e7e-c933dd49411b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9ljl7\" (UID: \"e49b99f2-b6f5-4046-8e7e-c933dd49411b\") " pod="openstack/ssh-known-hosts-edpm-deployment-9ljl7" Oct 07 13:08:13 crc kubenswrapper[5024]: I1007 13:08:13.370067 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e49b99f2-b6f5-4046-8e7e-c933dd49411b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9ljl7\" (UID: \"e49b99f2-b6f5-4046-8e7e-c933dd49411b\") " pod="openstack/ssh-known-hosts-edpm-deployment-9ljl7" 
Oct 07 13:08:13 crc kubenswrapper[5024]: I1007 13:08:13.370614 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e49b99f2-b6f5-4046-8e7e-c933dd49411b-ceph\") pod \"ssh-known-hosts-edpm-deployment-9ljl7\" (UID: \"e49b99f2-b6f5-4046-8e7e-c933dd49411b\") " pod="openstack/ssh-known-hosts-edpm-deployment-9ljl7" Oct 07 13:08:13 crc kubenswrapper[5024]: I1007 13:08:13.370914 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v82k\" (UniqueName: \"kubernetes.io/projected/e49b99f2-b6f5-4046-8e7e-c933dd49411b-kube-api-access-7v82k\") pod \"ssh-known-hosts-edpm-deployment-9ljl7\" (UID: \"e49b99f2-b6f5-4046-8e7e-c933dd49411b\") " pod="openstack/ssh-known-hosts-edpm-deployment-9ljl7" Oct 07 13:08:13 crc kubenswrapper[5024]: I1007 13:08:13.473911 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e49b99f2-b6f5-4046-8e7e-c933dd49411b-ceph\") pod \"ssh-known-hosts-edpm-deployment-9ljl7\" (UID: \"e49b99f2-b6f5-4046-8e7e-c933dd49411b\") " pod="openstack/ssh-known-hosts-edpm-deployment-9ljl7" Oct 07 13:08:13 crc kubenswrapper[5024]: I1007 13:08:13.474174 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v82k\" (UniqueName: \"kubernetes.io/projected/e49b99f2-b6f5-4046-8e7e-c933dd49411b-kube-api-access-7v82k\") pod \"ssh-known-hosts-edpm-deployment-9ljl7\" (UID: \"e49b99f2-b6f5-4046-8e7e-c933dd49411b\") " pod="openstack/ssh-known-hosts-edpm-deployment-9ljl7" Oct 07 13:08:13 crc kubenswrapper[5024]: I1007 13:08:13.474320 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e49b99f2-b6f5-4046-8e7e-c933dd49411b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9ljl7\" (UID: \"e49b99f2-b6f5-4046-8e7e-c933dd49411b\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-9ljl7" Oct 07 13:08:13 crc kubenswrapper[5024]: I1007 13:08:13.474380 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e49b99f2-b6f5-4046-8e7e-c933dd49411b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9ljl7\" (UID: \"e49b99f2-b6f5-4046-8e7e-c933dd49411b\") " pod="openstack/ssh-known-hosts-edpm-deployment-9ljl7" Oct 07 13:08:13 crc kubenswrapper[5024]: I1007 13:08:13.480833 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e49b99f2-b6f5-4046-8e7e-c933dd49411b-ceph\") pod \"ssh-known-hosts-edpm-deployment-9ljl7\" (UID: \"e49b99f2-b6f5-4046-8e7e-c933dd49411b\") " pod="openstack/ssh-known-hosts-edpm-deployment-9ljl7" Oct 07 13:08:13 crc kubenswrapper[5024]: I1007 13:08:13.480999 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e49b99f2-b6f5-4046-8e7e-c933dd49411b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9ljl7\" (UID: \"e49b99f2-b6f5-4046-8e7e-c933dd49411b\") " pod="openstack/ssh-known-hosts-edpm-deployment-9ljl7" Oct 07 13:08:13 crc kubenswrapper[5024]: I1007 13:08:13.481464 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e49b99f2-b6f5-4046-8e7e-c933dd49411b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9ljl7\" (UID: \"e49b99f2-b6f5-4046-8e7e-c933dd49411b\") " pod="openstack/ssh-known-hosts-edpm-deployment-9ljl7" Oct 07 13:08:13 crc kubenswrapper[5024]: I1007 13:08:13.497754 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v82k\" (UniqueName: \"kubernetes.io/projected/e49b99f2-b6f5-4046-8e7e-c933dd49411b-kube-api-access-7v82k\") pod \"ssh-known-hosts-edpm-deployment-9ljl7\" (UID: 
\"e49b99f2-b6f5-4046-8e7e-c933dd49411b\") " pod="openstack/ssh-known-hosts-edpm-deployment-9ljl7" Oct 07 13:08:13 crc kubenswrapper[5024]: I1007 13:08:13.582581 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9ljl7" Oct 07 13:08:14 crc kubenswrapper[5024]: I1007 13:08:14.263291 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9ljl7"] Oct 07 13:08:14 crc kubenswrapper[5024]: I1007 13:08:14.752439 5024 scope.go:117] "RemoveContainer" containerID="a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b" Oct 07 13:08:14 crc kubenswrapper[5024]: E1007 13:08:14.753466 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:08:15 crc kubenswrapper[5024]: I1007 13:08:15.179624 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9ljl7" event={"ID":"e49b99f2-b6f5-4046-8e7e-c933dd49411b","Type":"ContainerStarted","Data":"bc0ee1610c5e7a3468147593ee3f52ba6b6a81db3d13157fc9340add18d11530"} Oct 07 13:08:16 crc kubenswrapper[5024]: I1007 13:08:16.189753 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9ljl7" event={"ID":"e49b99f2-b6f5-4046-8e7e-c933dd49411b","Type":"ContainerStarted","Data":"fc7b101d3fd59291f1caec7975fee1ca1c70a4dd84a6b2c12ae7053ff7b4dfcd"} Oct 07 13:08:16 crc kubenswrapper[5024]: I1007 13:08:16.219742 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-9ljl7" 
podStartSLOduration=2.465571176 podStartE2EDuration="3.219709994s" podCreationTimestamp="2025-10-07 13:08:13 +0000 UTC" firstStartedPulling="2025-10-07 13:08:14.26611756 +0000 UTC m=+2432.341904428" lastFinishedPulling="2025-10-07 13:08:15.020256408 +0000 UTC m=+2433.096043246" observedRunningTime="2025-10-07 13:08:16.207533984 +0000 UTC m=+2434.283320822" watchObservedRunningTime="2025-10-07 13:08:16.219709994 +0000 UTC m=+2434.295496842" Oct 07 13:08:26 crc kubenswrapper[5024]: I1007 13:08:26.315332 5024 generic.go:334] "Generic (PLEG): container finished" podID="e49b99f2-b6f5-4046-8e7e-c933dd49411b" containerID="fc7b101d3fd59291f1caec7975fee1ca1c70a4dd84a6b2c12ae7053ff7b4dfcd" exitCode=0 Oct 07 13:08:26 crc kubenswrapper[5024]: I1007 13:08:26.315455 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9ljl7" event={"ID":"e49b99f2-b6f5-4046-8e7e-c933dd49411b","Type":"ContainerDied","Data":"fc7b101d3fd59291f1caec7975fee1ca1c70a4dd84a6b2c12ae7053ff7b4dfcd"} Oct 07 13:08:27 crc kubenswrapper[5024]: I1007 13:08:27.752210 5024 scope.go:117] "RemoveContainer" containerID="a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b" Oct 07 13:08:27 crc kubenswrapper[5024]: E1007 13:08:27.752985 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:08:27 crc kubenswrapper[5024]: I1007 13:08:27.776538 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9ljl7" Oct 07 13:08:27 crc kubenswrapper[5024]: I1007 13:08:27.991927 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e49b99f2-b6f5-4046-8e7e-c933dd49411b-ceph\") pod \"e49b99f2-b6f5-4046-8e7e-c933dd49411b\" (UID: \"e49b99f2-b6f5-4046-8e7e-c933dd49411b\") " Oct 07 13:08:27 crc kubenswrapper[5024]: I1007 13:08:27.992298 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e49b99f2-b6f5-4046-8e7e-c933dd49411b-inventory-0\") pod \"e49b99f2-b6f5-4046-8e7e-c933dd49411b\" (UID: \"e49b99f2-b6f5-4046-8e7e-c933dd49411b\") " Oct 07 13:08:27 crc kubenswrapper[5024]: I1007 13:08:27.992344 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v82k\" (UniqueName: \"kubernetes.io/projected/e49b99f2-b6f5-4046-8e7e-c933dd49411b-kube-api-access-7v82k\") pod \"e49b99f2-b6f5-4046-8e7e-c933dd49411b\" (UID: \"e49b99f2-b6f5-4046-8e7e-c933dd49411b\") " Oct 07 13:08:27 crc kubenswrapper[5024]: I1007 13:08:27.992460 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e49b99f2-b6f5-4046-8e7e-c933dd49411b-ssh-key-openstack-edpm-ipam\") pod \"e49b99f2-b6f5-4046-8e7e-c933dd49411b\" (UID: \"e49b99f2-b6f5-4046-8e7e-c933dd49411b\") " Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.001289 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e49b99f2-b6f5-4046-8e7e-c933dd49411b-ceph" (OuterVolumeSpecName: "ceph") pod "e49b99f2-b6f5-4046-8e7e-c933dd49411b" (UID: "e49b99f2-b6f5-4046-8e7e-c933dd49411b"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.003739 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e49b99f2-b6f5-4046-8e7e-c933dd49411b-kube-api-access-7v82k" (OuterVolumeSpecName: "kube-api-access-7v82k") pod "e49b99f2-b6f5-4046-8e7e-c933dd49411b" (UID: "e49b99f2-b6f5-4046-8e7e-c933dd49411b"). InnerVolumeSpecName "kube-api-access-7v82k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.023318 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e49b99f2-b6f5-4046-8e7e-c933dd49411b-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "e49b99f2-b6f5-4046-8e7e-c933dd49411b" (UID: "e49b99f2-b6f5-4046-8e7e-c933dd49411b"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.023409 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e49b99f2-b6f5-4046-8e7e-c933dd49411b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e49b99f2-b6f5-4046-8e7e-c933dd49411b" (UID: "e49b99f2-b6f5-4046-8e7e-c933dd49411b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.095489 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e49b99f2-b6f5-4046-8e7e-c933dd49411b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.095525 5024 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e49b99f2-b6f5-4046-8e7e-c933dd49411b-ceph\") on node \"crc\" DevicePath \"\"" Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.095538 5024 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e49b99f2-b6f5-4046-8e7e-c933dd49411b-inventory-0\") on node \"crc\" DevicePath \"\"" Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.095551 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v82k\" (UniqueName: \"kubernetes.io/projected/e49b99f2-b6f5-4046-8e7e-c933dd49411b-kube-api-access-7v82k\") on node \"crc\" DevicePath \"\"" Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.342248 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9ljl7" event={"ID":"e49b99f2-b6f5-4046-8e7e-c933dd49411b","Type":"ContainerDied","Data":"bc0ee1610c5e7a3468147593ee3f52ba6b6a81db3d13157fc9340add18d11530"} Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.342312 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc0ee1610c5e7a3468147593ee3f52ba6b6a81db3d13157fc9340add18d11530" Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.342371 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9ljl7"
Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.440356 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-zkt2n"]
Oct 07 13:08:28 crc kubenswrapper[5024]: E1007 13:08:28.440924 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e49b99f2-b6f5-4046-8e7e-c933dd49411b" containerName="ssh-known-hosts-edpm-deployment"
Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.440951 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e49b99f2-b6f5-4046-8e7e-c933dd49411b" containerName="ssh-known-hosts-edpm-deployment"
Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.441275 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="e49b99f2-b6f5-4046-8e7e-c933dd49411b" containerName="ssh-known-hosts-edpm-deployment"
Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.442415 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zkt2n"
Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.447668 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.447969 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.447989 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.448049 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-424lb"
Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.448002 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.460990 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-zkt2n"]
Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.606062 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g79h2\" (UniqueName: \"kubernetes.io/projected/4c27b4b1-36a5-4e33-9098-35a524752868-kube-api-access-g79h2\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zkt2n\" (UID: \"4c27b4b1-36a5-4e33-9098-35a524752868\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zkt2n"
Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.606159 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4c27b4b1-36a5-4e33-9098-35a524752868-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zkt2n\" (UID: \"4c27b4b1-36a5-4e33-9098-35a524752868\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zkt2n"
Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.607252 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c27b4b1-36a5-4e33-9098-35a524752868-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zkt2n\" (UID: \"4c27b4b1-36a5-4e33-9098-35a524752868\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zkt2n"
Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.607529 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4c27b4b1-36a5-4e33-9098-35a524752868-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zkt2n\" (UID: \"4c27b4b1-36a5-4e33-9098-35a524752868\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zkt2n"
Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.709867 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g79h2\" (UniqueName: \"kubernetes.io/projected/4c27b4b1-36a5-4e33-9098-35a524752868-kube-api-access-g79h2\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zkt2n\" (UID: \"4c27b4b1-36a5-4e33-9098-35a524752868\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zkt2n"
Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.709923 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4c27b4b1-36a5-4e33-9098-35a524752868-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zkt2n\" (UID: \"4c27b4b1-36a5-4e33-9098-35a524752868\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zkt2n"
Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.709968 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c27b4b1-36a5-4e33-9098-35a524752868-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zkt2n\" (UID: \"4c27b4b1-36a5-4e33-9098-35a524752868\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zkt2n"
Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.710071 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4c27b4b1-36a5-4e33-9098-35a524752868-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zkt2n\" (UID: \"4c27b4b1-36a5-4e33-9098-35a524752868\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zkt2n"
Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.715966 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c27b4b1-36a5-4e33-9098-35a524752868-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zkt2n\" (UID: \"4c27b4b1-36a5-4e33-9098-35a524752868\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zkt2n"
Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.717383 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4c27b4b1-36a5-4e33-9098-35a524752868-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zkt2n\" (UID: \"4c27b4b1-36a5-4e33-9098-35a524752868\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zkt2n"
Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.719257 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4c27b4b1-36a5-4e33-9098-35a524752868-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zkt2n\" (UID: \"4c27b4b1-36a5-4e33-9098-35a524752868\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zkt2n"
Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.742028 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g79h2\" (UniqueName: \"kubernetes.io/projected/4c27b4b1-36a5-4e33-9098-35a524752868-kube-api-access-g79h2\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zkt2n\" (UID: \"4c27b4b1-36a5-4e33-9098-35a524752868\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zkt2n"
Oct 07 13:08:28 crc kubenswrapper[5024]: I1007 13:08:28.765527 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zkt2n"
Oct 07 13:08:29 crc kubenswrapper[5024]: I1007 13:08:29.317031 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-zkt2n"]
Oct 07 13:08:29 crc kubenswrapper[5024]: W1007 13:08:29.324387 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c27b4b1_36a5_4e33_9098_35a524752868.slice/crio-1aa00ac3935219db93d59a7ba95e5a6c24c061e1d34df57b84ea16a33def3fce WatchSource:0}: Error finding container 1aa00ac3935219db93d59a7ba95e5a6c24c061e1d34df57b84ea16a33def3fce: Status 404 returned error can't find the container with id 1aa00ac3935219db93d59a7ba95e5a6c24c061e1d34df57b84ea16a33def3fce
Oct 07 13:08:29 crc kubenswrapper[5024]: I1007 13:08:29.352986 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zkt2n" event={"ID":"4c27b4b1-36a5-4e33-9098-35a524752868","Type":"ContainerStarted","Data":"1aa00ac3935219db93d59a7ba95e5a6c24c061e1d34df57b84ea16a33def3fce"}
Oct 07 13:08:30 crc kubenswrapper[5024]: I1007 13:08:30.366742 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zkt2n" event={"ID":"4c27b4b1-36a5-4e33-9098-35a524752868","Type":"ContainerStarted","Data":"57ac9a11bfe5fe69569cfa82768d4839a59c46e8c0eaef21b9ec5b8e99665c01"}
Oct 07 13:08:30 crc kubenswrapper[5024]: I1007 13:08:30.392860 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zkt2n" podStartSLOduration=1.711779081 podStartE2EDuration="2.392830408s" podCreationTimestamp="2025-10-07 13:08:28 +0000 UTC" firstStartedPulling="2025-10-07 13:08:29.328028574 +0000 UTC m=+2447.403815412" lastFinishedPulling="2025-10-07 13:08:30.009079871 +0000 UTC m=+2448.084866739" observedRunningTime="2025-10-07 13:08:30.39011488 +0000 UTC m=+2448.465901758" watchObservedRunningTime="2025-10-07 13:08:30.392830408 +0000 UTC m=+2448.468617266"
Oct 07 13:08:39 crc kubenswrapper[5024]: I1007 13:08:39.470282 5024 generic.go:334] "Generic (PLEG): container finished" podID="4c27b4b1-36a5-4e33-9098-35a524752868" containerID="57ac9a11bfe5fe69569cfa82768d4839a59c46e8c0eaef21b9ec5b8e99665c01" exitCode=0
Oct 07 13:08:39 crc kubenswrapper[5024]: I1007 13:08:39.470349 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zkt2n" event={"ID":"4c27b4b1-36a5-4e33-9098-35a524752868","Type":"ContainerDied","Data":"57ac9a11bfe5fe69569cfa82768d4839a59c46e8c0eaef21b9ec5b8e99665c01"}
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.057361 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zkt2n"
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.211680 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4c27b4b1-36a5-4e33-9098-35a524752868-ssh-key\") pod \"4c27b4b1-36a5-4e33-9098-35a524752868\" (UID: \"4c27b4b1-36a5-4e33-9098-35a524752868\") "
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.211802 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c27b4b1-36a5-4e33-9098-35a524752868-inventory\") pod \"4c27b4b1-36a5-4e33-9098-35a524752868\" (UID: \"4c27b4b1-36a5-4e33-9098-35a524752868\") "
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.211902 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4c27b4b1-36a5-4e33-9098-35a524752868-ceph\") pod \"4c27b4b1-36a5-4e33-9098-35a524752868\" (UID: \"4c27b4b1-36a5-4e33-9098-35a524752868\") "
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.212111 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g79h2\" (UniqueName: \"kubernetes.io/projected/4c27b4b1-36a5-4e33-9098-35a524752868-kube-api-access-g79h2\") pod \"4c27b4b1-36a5-4e33-9098-35a524752868\" (UID: \"4c27b4b1-36a5-4e33-9098-35a524752868\") "
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.221913 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c27b4b1-36a5-4e33-9098-35a524752868-kube-api-access-g79h2" (OuterVolumeSpecName: "kube-api-access-g79h2") pod "4c27b4b1-36a5-4e33-9098-35a524752868" (UID: "4c27b4b1-36a5-4e33-9098-35a524752868"). InnerVolumeSpecName "kube-api-access-g79h2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.222535 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c27b4b1-36a5-4e33-9098-35a524752868-ceph" (OuterVolumeSpecName: "ceph") pod "4c27b4b1-36a5-4e33-9098-35a524752868" (UID: "4c27b4b1-36a5-4e33-9098-35a524752868"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.263670 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c27b4b1-36a5-4e33-9098-35a524752868-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4c27b4b1-36a5-4e33-9098-35a524752868" (UID: "4c27b4b1-36a5-4e33-9098-35a524752868"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.266776 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c27b4b1-36a5-4e33-9098-35a524752868-inventory" (OuterVolumeSpecName: "inventory") pod "4c27b4b1-36a5-4e33-9098-35a524752868" (UID: "4c27b4b1-36a5-4e33-9098-35a524752868"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.315228 5024 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4c27b4b1-36a5-4e33-9098-35a524752868-ceph\") on node \"crc\" DevicePath \"\""
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.315581 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g79h2\" (UniqueName: \"kubernetes.io/projected/4c27b4b1-36a5-4e33-9098-35a524752868-kube-api-access-g79h2\") on node \"crc\" DevicePath \"\""
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.315751 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4c27b4b1-36a5-4e33-9098-35a524752868-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.315902 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c27b4b1-36a5-4e33-9098-35a524752868-inventory\") on node \"crc\" DevicePath \"\""
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.502593 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zkt2n" event={"ID":"4c27b4b1-36a5-4e33-9098-35a524752868","Type":"ContainerDied","Data":"1aa00ac3935219db93d59a7ba95e5a6c24c061e1d34df57b84ea16a33def3fce"}
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.502693 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1aa00ac3935219db93d59a7ba95e5a6c24c061e1d34df57b84ea16a33def3fce"
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.502846 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zkt2n"
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.582864 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp"]
Oct 07 13:08:41 crc kubenswrapper[5024]: E1007 13:08:41.583428 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c27b4b1-36a5-4e33-9098-35a524752868" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.583454 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c27b4b1-36a5-4e33-9098-35a524752868" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.583625 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c27b4b1-36a5-4e33-9098-35a524752868" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.584344 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp"
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.586966 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.586994 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.587961 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-424lb"
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.588351 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.590314 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.594882 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp"]
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.725842 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp\" (UID: \"a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp"
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.725928 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7zgn\" (UniqueName: \"kubernetes.io/projected/a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9-kube-api-access-j7zgn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp\" (UID: \"a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp"
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.726057 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp\" (UID: \"a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp"
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.726114 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp\" (UID: \"a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp"
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.751611 5024 scope.go:117] "RemoveContainer" containerID="a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b"
Oct 07 13:08:41 crc kubenswrapper[5024]: E1007 13:08:41.751962 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b"
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.828973 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp\" (UID: \"a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp"
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.829380 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp\" (UID: \"a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp"
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.829579 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp\" (UID: \"a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp"
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.829752 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zgn\" (UniqueName: \"kubernetes.io/projected/a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9-kube-api-access-j7zgn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp\" (UID: \"a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp"
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.834631 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp\" (UID: \"a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp"
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.834893 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp\" (UID: \"a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp"
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.836827 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp\" (UID: \"a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp"
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.855853 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7zgn\" (UniqueName: \"kubernetes.io/projected/a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9-kube-api-access-j7zgn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp\" (UID: \"a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp"
Oct 07 13:08:41 crc kubenswrapper[5024]: I1007 13:08:41.905184 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp"
Oct 07 13:08:42 crc kubenswrapper[5024]: I1007 13:08:42.274152 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp"]
Oct 07 13:08:42 crc kubenswrapper[5024]: I1007 13:08:42.517914 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp" event={"ID":"a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9","Type":"ContainerStarted","Data":"a274c8cbf8ff328eaad0b265eb6c8104cef14c0c3ad5ebbc2c9581a67f0edf69"}
Oct 07 13:08:42 crc kubenswrapper[5024]: I1007 13:08:42.863023 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 07 13:08:43 crc kubenswrapper[5024]: I1007 13:08:43.530336 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp" event={"ID":"a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9","Type":"ContainerStarted","Data":"12f242cb89aa08dac5463bb984e25337523de275e905021cadc91804e290e4f6"}
Oct 07 13:08:43 crc kubenswrapper[5024]: I1007 13:08:43.557506 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp" podStartSLOduration=1.979489735 podStartE2EDuration="2.557469308s" podCreationTimestamp="2025-10-07 13:08:41 +0000 UTC" firstStartedPulling="2025-10-07 13:08:42.282282375 +0000 UTC m=+2460.358069263" lastFinishedPulling="2025-10-07 13:08:42.860261998 +0000 UTC m=+2460.936048836" observedRunningTime="2025-10-07 13:08:43.550119347 +0000 UTC m=+2461.625906205" watchObservedRunningTime="2025-10-07 13:08:43.557469308 +0000 UTC m=+2461.633256186"
Oct 07 13:08:53 crc kubenswrapper[5024]: I1007 13:08:53.639994 5024 generic.go:334] "Generic (PLEG): container finished" podID="a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9" containerID="12f242cb89aa08dac5463bb984e25337523de275e905021cadc91804e290e4f6" exitCode=0
Oct 07 13:08:53 crc kubenswrapper[5024]: I1007 13:08:53.640256 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp" event={"ID":"a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9","Type":"ContainerDied","Data":"12f242cb89aa08dac5463bb984e25337523de275e905021cadc91804e290e4f6"}
Oct 07 13:08:54 crc kubenswrapper[5024]: I1007 13:08:54.765989 5024 scope.go:117] "RemoveContainer" containerID="a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b"
Oct 07 13:08:54 crc kubenswrapper[5024]: E1007 13:08:54.766542 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b"
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.182485 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp"
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.282446 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7zgn\" (UniqueName: \"kubernetes.io/projected/a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9-kube-api-access-j7zgn\") pod \"a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9\" (UID: \"a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9\") "
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.282750 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9-ceph\") pod \"a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9\" (UID: \"a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9\") "
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.282809 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9-ssh-key\") pod \"a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9\" (UID: \"a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9\") "
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.282985 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9-inventory\") pod \"a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9\" (UID: \"a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9\") "
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.290388 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9-kube-api-access-j7zgn" (OuterVolumeSpecName: "kube-api-access-j7zgn") pod "a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9" (UID: "a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9"). InnerVolumeSpecName "kube-api-access-j7zgn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.290914 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9-ceph" (OuterVolumeSpecName: "ceph") pod "a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9" (UID: "a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.314067 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9" (UID: "a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.333521 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9-inventory" (OuterVolumeSpecName: "inventory") pod "a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9" (UID: "a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.386302 5024 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9-ceph\") on node \"crc\" DevicePath \"\""
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.386351 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.386369 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9-inventory\") on node \"crc\" DevicePath \"\""
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.386383 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7zgn\" (UniqueName: \"kubernetes.io/projected/a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9-kube-api-access-j7zgn\") on node \"crc\" DevicePath \"\""
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.662197 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp" event={"ID":"a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9","Type":"ContainerDied","Data":"a274c8cbf8ff328eaad0b265eb6c8104cef14c0c3ad5ebbc2c9581a67f0edf69"}
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.662248 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a274c8cbf8ff328eaad0b265eb6c8104cef14c0c3ad5ebbc2c9581a67f0edf69"
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.662281 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp"
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.768570 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd"]
Oct 07 13:08:55 crc kubenswrapper[5024]: E1007 13:08:55.769123 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.769197 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.769424 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.770256 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd"
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.784742 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd"]
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.826159 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-424lb"
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.826234 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0"
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.826335 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.826415 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.826644 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.826791 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0"
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.826870 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0"
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.826952 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.941265 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/989605a8-0e53-4a59-9c1f-d927baa38ca6-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd"
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.941321 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd"
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.941354 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd"
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.941569 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m22qc\" (UniqueName: \"kubernetes.io/projected/989605a8-0e53-4a59-9c1f-d927baa38ca6-kube-api-access-m22qc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd"
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.941688 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/989605a8-0e53-4a59-9c1f-d927baa38ca6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd"
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.941740 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd"
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.941837 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd"
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.942158 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd"
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.942297 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/989605a8-0e53-4a59-9c1f-d927baa38ca6-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd"
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.942374 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd"
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.942567 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd"
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.942836 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd"
Oct 07 13:08:55 crc kubenswrapper[5024]: I1007 13:08:55.942933 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName:
\"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:08:56 crc kubenswrapper[5024]: I1007 13:08:56.044953 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:08:56 crc kubenswrapper[5024]: I1007 13:08:56.045059 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:08:56 crc kubenswrapper[5024]: I1007 13:08:56.045100 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/989605a8-0e53-4a59-9c1f-d927baa38ca6-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:08:56 crc kubenswrapper[5024]: I1007 13:08:56.045259 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-nova-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:08:56 crc kubenswrapper[5024]: I1007 13:08:56.045289 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:08:56 crc kubenswrapper[5024]: I1007 13:08:56.045323 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m22qc\" (UniqueName: \"kubernetes.io/projected/989605a8-0e53-4a59-9c1f-d927baa38ca6-kube-api-access-m22qc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:08:56 crc kubenswrapper[5024]: I1007 13:08:56.045364 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/989605a8-0e53-4a59-9c1f-d927baa38ca6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:08:56 crc kubenswrapper[5024]: I1007 13:08:56.045394 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:08:56 crc kubenswrapper[5024]: I1007 13:08:56.045439 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:08:56 crc kubenswrapper[5024]: I1007 13:08:56.045559 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:08:56 crc kubenswrapper[5024]: I1007 13:08:56.045626 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/989605a8-0e53-4a59-9c1f-d927baa38ca6-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:08:56 crc kubenswrapper[5024]: I1007 13:08:56.045674 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:08:56 crc kubenswrapper[5024]: I1007 13:08:56.045717 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:08:56 crc kubenswrapper[5024]: I1007 13:08:56.053044 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:08:56 crc kubenswrapper[5024]: I1007 13:08:56.054802 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:08:56 crc kubenswrapper[5024]: I1007 13:08:56.055020 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:08:56 crc kubenswrapper[5024]: I1007 13:08:56.055174 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:08:56 crc kubenswrapper[5024]: I1007 13:08:56.056533 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/989605a8-0e53-4a59-9c1f-d927baa38ca6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:08:56 crc kubenswrapper[5024]: I1007 13:08:56.058047 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:08:56 crc kubenswrapper[5024]: I1007 13:08:56.060001 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:08:56 crc kubenswrapper[5024]: I1007 13:08:56.060772 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:08:56 crc 
kubenswrapper[5024]: I1007 13:08:56.064698 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/989605a8-0e53-4a59-9c1f-d927baa38ca6-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:08:56 crc kubenswrapper[5024]: I1007 13:08:56.081778 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/989605a8-0e53-4a59-9c1f-d927baa38ca6-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:08:56 crc kubenswrapper[5024]: I1007 13:08:56.083403 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:08:56 crc kubenswrapper[5024]: I1007 13:08:56.086185 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:08:56 crc kubenswrapper[5024]: I1007 13:08:56.088101 5024 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-m22qc\" (UniqueName: \"kubernetes.io/projected/989605a8-0e53-4a59-9c1f-d927baa38ca6-kube-api-access-m22qc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:08:56 crc kubenswrapper[5024]: I1007 13:08:56.162774 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:08:56 crc kubenswrapper[5024]: I1007 13:08:56.773081 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd"] Oct 07 13:08:57 crc kubenswrapper[5024]: I1007 13:08:57.684665 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" event={"ID":"989605a8-0e53-4a59-9c1f-d927baa38ca6","Type":"ContainerStarted","Data":"1bf70130b38c6b9540661fd70c0df8a3260ffb83d339813e0c896062f952a9e2"} Oct 07 13:08:57 crc kubenswrapper[5024]: I1007 13:08:57.685012 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" event={"ID":"989605a8-0e53-4a59-9c1f-d927baa38ca6","Type":"ContainerStarted","Data":"df0b72de06eb788b8ba45f05a59824fbbce4de247ad801ecc94ef1e9423f2e68"} Oct 07 13:08:57 crc kubenswrapper[5024]: I1007 13:08:57.715408 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" podStartSLOduration=2.234777953 podStartE2EDuration="2.715373134s" podCreationTimestamp="2025-10-07 13:08:55 +0000 UTC" firstStartedPulling="2025-10-07 13:08:56.775106583 +0000 UTC m=+2474.850893421" lastFinishedPulling="2025-10-07 13:08:57.255701754 +0000 UTC m=+2475.331488602" observedRunningTime="2025-10-07 13:08:57.704718287 +0000 UTC 
m=+2475.780505135" watchObservedRunningTime="2025-10-07 13:08:57.715373134 +0000 UTC m=+2475.791159972" Oct 07 13:09:05 crc kubenswrapper[5024]: I1007 13:09:05.752204 5024 scope.go:117] "RemoveContainer" containerID="a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b" Oct 07 13:09:05 crc kubenswrapper[5024]: E1007 13:09:05.753664 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:09:17 crc kubenswrapper[5024]: I1007 13:09:17.752278 5024 scope.go:117] "RemoveContainer" containerID="a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b" Oct 07 13:09:17 crc kubenswrapper[5024]: E1007 13:09:17.753465 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:09:28 crc kubenswrapper[5024]: I1007 13:09:28.752418 5024 scope.go:117] "RemoveContainer" containerID="a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b" Oct 07 13:09:28 crc kubenswrapper[5024]: E1007 13:09:28.753249 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:09:35 crc kubenswrapper[5024]: I1007 13:09:35.110180 5024 generic.go:334] "Generic (PLEG): container finished" podID="989605a8-0e53-4a59-9c1f-d927baa38ca6" containerID="1bf70130b38c6b9540661fd70c0df8a3260ffb83d339813e0c896062f952a9e2" exitCode=0 Oct 07 13:09:35 crc kubenswrapper[5024]: I1007 13:09:35.110470 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" event={"ID":"989605a8-0e53-4a59-9c1f-d927baa38ca6","Type":"ContainerDied","Data":"1bf70130b38c6b9540661fd70c0df8a3260ffb83d339813e0c896062f952a9e2"} Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.627314 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.755482 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-ssh-key\") pod \"989605a8-0e53-4a59-9c1f-d927baa38ca6\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.755671 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-inventory\") pod \"989605a8-0e53-4a59-9c1f-d927baa38ca6\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.755741 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-neutron-metadata-combined-ca-bundle\") pod \"989605a8-0e53-4a59-9c1f-d927baa38ca6\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.755842 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/989605a8-0e53-4a59-9c1f-d927baa38ca6-openstack-edpm-ipam-ovn-default-certs-0\") pod \"989605a8-0e53-4a59-9c1f-d927baa38ca6\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.755891 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/989605a8-0e53-4a59-9c1f-d927baa38ca6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"989605a8-0e53-4a59-9c1f-d927baa38ca6\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.755936 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/989605a8-0e53-4a59-9c1f-d927baa38ca6-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"989605a8-0e53-4a59-9c1f-d927baa38ca6\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.755984 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m22qc\" (UniqueName: \"kubernetes.io/projected/989605a8-0e53-4a59-9c1f-d927baa38ca6-kube-api-access-m22qc\") pod \"989605a8-0e53-4a59-9c1f-d927baa38ca6\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.756060 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-nova-combined-ca-bundle\") pod \"989605a8-0e53-4a59-9c1f-d927baa38ca6\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.756127 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-ovn-combined-ca-bundle\") pod \"989605a8-0e53-4a59-9c1f-d927baa38ca6\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.756217 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-libvirt-combined-ca-bundle\") pod \"989605a8-0e53-4a59-9c1f-d927baa38ca6\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.756262 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-ceph\") pod \"989605a8-0e53-4a59-9c1f-d927baa38ca6\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.756548 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-bootstrap-combined-ca-bundle\") pod \"989605a8-0e53-4a59-9c1f-d927baa38ca6\" (UID: \"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.756698 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-repo-setup-combined-ca-bundle\") pod \"989605a8-0e53-4a59-9c1f-d927baa38ca6\" (UID: 
\"989605a8-0e53-4a59-9c1f-d927baa38ca6\") " Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.763647 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "989605a8-0e53-4a59-9c1f-d927baa38ca6" (UID: "989605a8-0e53-4a59-9c1f-d927baa38ca6"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.763912 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/989605a8-0e53-4a59-9c1f-d927baa38ca6-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "989605a8-0e53-4a59-9c1f-d927baa38ca6" (UID: "989605a8-0e53-4a59-9c1f-d927baa38ca6"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.763916 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-ceph" (OuterVolumeSpecName: "ceph") pod "989605a8-0e53-4a59-9c1f-d927baa38ca6" (UID: "989605a8-0e53-4a59-9c1f-d927baa38ca6"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.764473 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "989605a8-0e53-4a59-9c1f-d927baa38ca6" (UID: "989605a8-0e53-4a59-9c1f-d927baa38ca6"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.766015 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/989605a8-0e53-4a59-9c1f-d927baa38ca6-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "989605a8-0e53-4a59-9c1f-d927baa38ca6" (UID: "989605a8-0e53-4a59-9c1f-d927baa38ca6"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.766384 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "989605a8-0e53-4a59-9c1f-d927baa38ca6" (UID: "989605a8-0e53-4a59-9c1f-d927baa38ca6"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.768868 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "989605a8-0e53-4a59-9c1f-d927baa38ca6" (UID: "989605a8-0e53-4a59-9c1f-d927baa38ca6"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.769737 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "989605a8-0e53-4a59-9c1f-d927baa38ca6" (UID: "989605a8-0e53-4a59-9c1f-d927baa38ca6"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.771307 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/989605a8-0e53-4a59-9c1f-d927baa38ca6-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "989605a8-0e53-4a59-9c1f-d927baa38ca6" (UID: "989605a8-0e53-4a59-9c1f-d927baa38ca6"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.772367 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/989605a8-0e53-4a59-9c1f-d927baa38ca6-kube-api-access-m22qc" (OuterVolumeSpecName: "kube-api-access-m22qc") pod "989605a8-0e53-4a59-9c1f-d927baa38ca6" (UID: "989605a8-0e53-4a59-9c1f-d927baa38ca6"). InnerVolumeSpecName "kube-api-access-m22qc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.772745 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "989605a8-0e53-4a59-9c1f-d927baa38ca6" (UID: "989605a8-0e53-4a59-9c1f-d927baa38ca6"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.797593 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "989605a8-0e53-4a59-9c1f-d927baa38ca6" (UID: "989605a8-0e53-4a59-9c1f-d927baa38ca6"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.799434 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-inventory" (OuterVolumeSpecName: "inventory") pod "989605a8-0e53-4a59-9c1f-d927baa38ca6" (UID: "989605a8-0e53-4a59-9c1f-d927baa38ca6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.859936 5024 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.859993 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.860009 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-inventory\") on node \"crc\" DevicePath \"\"" Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.860024 5024 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.860040 5024 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/989605a8-0e53-4a59-9c1f-d927baa38ca6-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.860055 5024 reconciler_common.go:293] 
"Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/989605a8-0e53-4a59-9c1f-d927baa38ca6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.860070 5024 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/989605a8-0e53-4a59-9c1f-d927baa38ca6-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.860085 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m22qc\" (UniqueName: \"kubernetes.io/projected/989605a8-0e53-4a59-9c1f-d927baa38ca6-kube-api-access-m22qc\") on node \"crc\" DevicePath \"\"" Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.860097 5024 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.860109 5024 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.860120 5024 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 13:09:36 crc kubenswrapper[5024]: I1007 13:09:36.860156 5024 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-ceph\") on node \"crc\" DevicePath \"\"" Oct 07 13:09:36 crc kubenswrapper[5024]: 
I1007 13:09:36.860168 5024 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989605a8-0e53-4a59-9c1f-d927baa38ca6-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 13:09:37 crc kubenswrapper[5024]: I1007 13:09:37.169541 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" event={"ID":"989605a8-0e53-4a59-9c1f-d927baa38ca6","Type":"ContainerDied","Data":"df0b72de06eb788b8ba45f05a59824fbbce4de247ad801ecc94ef1e9423f2e68"} Oct 07 13:09:37 crc kubenswrapper[5024]: I1007 13:09:37.169596 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df0b72de06eb788b8ba45f05a59824fbbce4de247ad801ecc94ef1e9423f2e68" Oct 07 13:09:37 crc kubenswrapper[5024]: I1007 13:09:37.169672 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd" Oct 07 13:09:37 crc kubenswrapper[5024]: I1007 13:09:37.350155 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c"] Oct 07 13:09:37 crc kubenswrapper[5024]: E1007 13:09:37.350844 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="989605a8-0e53-4a59-9c1f-d927baa38ca6" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 07 13:09:37 crc kubenswrapper[5024]: I1007 13:09:37.350870 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="989605a8-0e53-4a59-9c1f-d927baa38ca6" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 07 13:09:37 crc kubenswrapper[5024]: I1007 13:09:37.351094 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="989605a8-0e53-4a59-9c1f-d927baa38ca6" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 07 13:09:37 crc kubenswrapper[5024]: I1007 13:09:37.352086 5024 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c" Oct 07 13:09:37 crc kubenswrapper[5024]: I1007 13:09:37.354941 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 07 13:09:37 crc kubenswrapper[5024]: I1007 13:09:37.355040 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 07 13:09:37 crc kubenswrapper[5024]: I1007 13:09:37.355251 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-424lb" Oct 07 13:09:37 crc kubenswrapper[5024]: I1007 13:09:37.362735 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c"] Oct 07 13:09:37 crc kubenswrapper[5024]: I1007 13:09:37.367509 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 07 13:09:37 crc kubenswrapper[5024]: I1007 13:09:37.367519 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 07 13:09:37 crc kubenswrapper[5024]: I1007 13:09:37.490854 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c\" (UID: \"ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c" Oct 07 13:09:37 crc kubenswrapper[5024]: I1007 13:09:37.491307 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48bb9\" (UniqueName: \"kubernetes.io/projected/ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033-kube-api-access-48bb9\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c\" (UID: 
\"ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c" Oct 07 13:09:37 crc kubenswrapper[5024]: I1007 13:09:37.491542 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c\" (UID: \"ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c" Oct 07 13:09:37 crc kubenswrapper[5024]: I1007 13:09:37.491683 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c\" (UID: \"ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c" Oct 07 13:09:37 crc kubenswrapper[5024]: I1007 13:09:37.594749 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c\" (UID: \"ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c" Oct 07 13:09:37 crc kubenswrapper[5024]: I1007 13:09:37.594966 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48bb9\" (UniqueName: \"kubernetes.io/projected/ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033-kube-api-access-48bb9\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c\" (UID: \"ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c" Oct 07 13:09:37 crc kubenswrapper[5024]: I1007 13:09:37.595072 5024 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c\" (UID: \"ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c" Oct 07 13:09:37 crc kubenswrapper[5024]: I1007 13:09:37.595215 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c\" (UID: \"ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c" Oct 07 13:09:37 crc kubenswrapper[5024]: I1007 13:09:37.602634 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c\" (UID: \"ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c" Oct 07 13:09:37 crc kubenswrapper[5024]: I1007 13:09:37.602806 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c\" (UID: \"ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c" Oct 07 13:09:37 crc kubenswrapper[5024]: I1007 13:09:37.602812 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c\" (UID: \"ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c" Oct 07 13:09:37 crc kubenswrapper[5024]: I1007 
13:09:37.627694 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48bb9\" (UniqueName: \"kubernetes.io/projected/ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033-kube-api-access-48bb9\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c\" (UID: \"ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c" Oct 07 13:09:37 crc kubenswrapper[5024]: I1007 13:09:37.705731 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c" Oct 07 13:09:38 crc kubenswrapper[5024]: I1007 13:09:38.306113 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c"] Oct 07 13:09:39 crc kubenswrapper[5024]: I1007 13:09:39.193200 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c" event={"ID":"ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033","Type":"ContainerStarted","Data":"b6f6a282b0dbba58263d80c2a06d707a166c24fa7b4b77320c012cb0fdfff740"} Oct 07 13:09:39 crc kubenswrapper[5024]: I1007 13:09:39.193742 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c" event={"ID":"ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033","Type":"ContainerStarted","Data":"37b815dc180dd3634a50802f66b587443a055d423619867407f495eae6293294"} Oct 07 13:09:39 crc kubenswrapper[5024]: I1007 13:09:39.221055 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c" podStartSLOduration=1.6924272710000001 podStartE2EDuration="2.221027974s" podCreationTimestamp="2025-10-07 13:09:37 +0000 UTC" firstStartedPulling="2025-10-07 13:09:38.316354845 +0000 UTC m=+2516.392141693" lastFinishedPulling="2025-10-07 13:09:38.844955548 +0000 UTC m=+2516.920742396" 
observedRunningTime="2025-10-07 13:09:39.212932921 +0000 UTC m=+2517.288719759" watchObservedRunningTime="2025-10-07 13:09:39.221027974 +0000 UTC m=+2517.296814822" Oct 07 13:09:39 crc kubenswrapper[5024]: I1007 13:09:39.752765 5024 scope.go:117] "RemoveContainer" containerID="a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b" Oct 07 13:09:39 crc kubenswrapper[5024]: E1007 13:09:39.753104 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:09:46 crc kubenswrapper[5024]: I1007 13:09:46.278533 5024 generic.go:334] "Generic (PLEG): container finished" podID="ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033" containerID="b6f6a282b0dbba58263d80c2a06d707a166c24fa7b4b77320c012cb0fdfff740" exitCode=0 Oct 07 13:09:46 crc kubenswrapper[5024]: I1007 13:09:46.278579 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c" event={"ID":"ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033","Type":"ContainerDied","Data":"b6f6a282b0dbba58263d80c2a06d707a166c24fa7b4b77320c012cb0fdfff740"} Oct 07 13:09:47 crc kubenswrapper[5024]: I1007 13:09:47.795789 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c" Oct 07 13:09:47 crc kubenswrapper[5024]: I1007 13:09:47.887284 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48bb9\" (UniqueName: \"kubernetes.io/projected/ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033-kube-api-access-48bb9\") pod \"ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033\" (UID: \"ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033\") " Oct 07 13:09:47 crc kubenswrapper[5024]: I1007 13:09:47.887378 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033-inventory\") pod \"ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033\" (UID: \"ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033\") " Oct 07 13:09:47 crc kubenswrapper[5024]: I1007 13:09:47.887480 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033-ssh-key\") pod \"ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033\" (UID: \"ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033\") " Oct 07 13:09:47 crc kubenswrapper[5024]: I1007 13:09:47.887560 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033-ceph\") pod \"ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033\" (UID: \"ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033\") " Oct 07 13:09:47 crc kubenswrapper[5024]: I1007 13:09:47.896554 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033-ceph" (OuterVolumeSpecName: "ceph") pod "ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033" (UID: "ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:09:47 crc kubenswrapper[5024]: I1007 13:09:47.898736 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033-kube-api-access-48bb9" (OuterVolumeSpecName: "kube-api-access-48bb9") pod "ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033" (UID: "ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033"). InnerVolumeSpecName "kube-api-access-48bb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:09:47 crc kubenswrapper[5024]: I1007 13:09:47.929774 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033" (UID: "ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:09:47 crc kubenswrapper[5024]: I1007 13:09:47.936866 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033-inventory" (OuterVolumeSpecName: "inventory") pod "ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033" (UID: "ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:09:47 crc kubenswrapper[5024]: I1007 13:09:47.990763 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48bb9\" (UniqueName: \"kubernetes.io/projected/ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033-kube-api-access-48bb9\") on node \"crc\" DevicePath \"\"" Oct 07 13:09:47 crc kubenswrapper[5024]: I1007 13:09:47.990813 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033-inventory\") on node \"crc\" DevicePath \"\"" Oct 07 13:09:47 crc kubenswrapper[5024]: I1007 13:09:47.990828 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 07 13:09:47 crc kubenswrapper[5024]: I1007 13:09:47.990841 5024 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033-ceph\") on node \"crc\" DevicePath \"\"" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.303929 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c" event={"ID":"ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033","Type":"ContainerDied","Data":"37b815dc180dd3634a50802f66b587443a055d423619867407f495eae6293294"} Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.303976 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37b815dc180dd3634a50802f66b587443a055d423619867407f495eae6293294" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.304002 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.435162 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx"] Oct 07 13:09:48 crc kubenswrapper[5024]: E1007 13:09:48.435601 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.435624 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.435845 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.436799 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.442244 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.442322 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.442398 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.451349 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx"] Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.452102 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.452492 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-424lb" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.452615 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.506482 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9b13355-d69c-4b53-972f-5d7014d5a81c-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jqrtx\" (UID: \"a9b13355-d69c-4b53-972f-5d7014d5a81c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.506594 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/a9b13355-d69c-4b53-972f-5d7014d5a81c-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jqrtx\" (UID: \"a9b13355-d69c-4b53-972f-5d7014d5a81c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.506656 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9b13355-d69c-4b53-972f-5d7014d5a81c-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jqrtx\" (UID: \"a9b13355-d69c-4b53-972f-5d7014d5a81c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.506686 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b13355-d69c-4b53-972f-5d7014d5a81c-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jqrtx\" (UID: \"a9b13355-d69c-4b53-972f-5d7014d5a81c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.506764 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9b13355-d69c-4b53-972f-5d7014d5a81c-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jqrtx\" (UID: \"a9b13355-d69c-4b53-972f-5d7014d5a81c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.506822 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s2p6\" (UniqueName: \"kubernetes.io/projected/a9b13355-d69c-4b53-972f-5d7014d5a81c-kube-api-access-8s2p6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jqrtx\" (UID: \"a9b13355-d69c-4b53-972f-5d7014d5a81c\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.609433 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s2p6\" (UniqueName: \"kubernetes.io/projected/a9b13355-d69c-4b53-972f-5d7014d5a81c-kube-api-access-8s2p6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jqrtx\" (UID: \"a9b13355-d69c-4b53-972f-5d7014d5a81c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.609562 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9b13355-d69c-4b53-972f-5d7014d5a81c-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jqrtx\" (UID: \"a9b13355-d69c-4b53-972f-5d7014d5a81c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.609644 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a9b13355-d69c-4b53-972f-5d7014d5a81c-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jqrtx\" (UID: \"a9b13355-d69c-4b53-972f-5d7014d5a81c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.609711 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9b13355-d69c-4b53-972f-5d7014d5a81c-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jqrtx\" (UID: \"a9b13355-d69c-4b53-972f-5d7014d5a81c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.609750 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a9b13355-d69c-4b53-972f-5d7014d5a81c-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jqrtx\" (UID: \"a9b13355-d69c-4b53-972f-5d7014d5a81c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.609856 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9b13355-d69c-4b53-972f-5d7014d5a81c-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jqrtx\" (UID: \"a9b13355-d69c-4b53-972f-5d7014d5a81c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.611284 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a9b13355-d69c-4b53-972f-5d7014d5a81c-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jqrtx\" (UID: \"a9b13355-d69c-4b53-972f-5d7014d5a81c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.615744 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9b13355-d69c-4b53-972f-5d7014d5a81c-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jqrtx\" (UID: \"a9b13355-d69c-4b53-972f-5d7014d5a81c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.615918 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b13355-d69c-4b53-972f-5d7014d5a81c-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jqrtx\" (UID: \"a9b13355-d69c-4b53-972f-5d7014d5a81c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.616788 5024 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9b13355-d69c-4b53-972f-5d7014d5a81c-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jqrtx\" (UID: \"a9b13355-d69c-4b53-972f-5d7014d5a81c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.620709 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9b13355-d69c-4b53-972f-5d7014d5a81c-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jqrtx\" (UID: \"a9b13355-d69c-4b53-972f-5d7014d5a81c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.629413 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s2p6\" (UniqueName: \"kubernetes.io/projected/a9b13355-d69c-4b53-972f-5d7014d5a81c-kube-api-access-8s2p6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jqrtx\" (UID: \"a9b13355-d69c-4b53-972f-5d7014d5a81c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx" Oct 07 13:09:48 crc kubenswrapper[5024]: I1007 13:09:48.761430 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx" Oct 07 13:09:49 crc kubenswrapper[5024]: I1007 13:09:49.156700 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx"] Oct 07 13:09:49 crc kubenswrapper[5024]: W1007 13:09:49.165492 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9b13355_d69c_4b53_972f_5d7014d5a81c.slice/crio-386ac18be6db827815f515476c49432cf7ed4d0062a0d8788a84cc2f841cda5b WatchSource:0}: Error finding container 386ac18be6db827815f515476c49432cf7ed4d0062a0d8788a84cc2f841cda5b: Status 404 returned error can't find the container with id 386ac18be6db827815f515476c49432cf7ed4d0062a0d8788a84cc2f841cda5b Oct 07 13:09:49 crc kubenswrapper[5024]: I1007 13:09:49.318325 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx" event={"ID":"a9b13355-d69c-4b53-972f-5d7014d5a81c","Type":"ContainerStarted","Data":"386ac18be6db827815f515476c49432cf7ed4d0062a0d8788a84cc2f841cda5b"} Oct 07 13:09:50 crc kubenswrapper[5024]: I1007 13:09:50.330647 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx" event={"ID":"a9b13355-d69c-4b53-972f-5d7014d5a81c","Type":"ContainerStarted","Data":"f905759f0e1206f0004af8caa8e59f1852a47149aeaa152d21610dfb242eac08"} Oct 07 13:09:50 crc kubenswrapper[5024]: I1007 13:09:50.354796 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx" podStartSLOduration=1.717313094 podStartE2EDuration="2.354769247s" podCreationTimestamp="2025-10-07 13:09:48 +0000 UTC" firstStartedPulling="2025-10-07 13:09:49.168775688 +0000 UTC m=+2527.244562526" lastFinishedPulling="2025-10-07 13:09:49.806231841 +0000 UTC m=+2527.882018679" observedRunningTime="2025-10-07 
13:09:50.353888941 +0000 UTC m=+2528.429675809" watchObservedRunningTime="2025-10-07 13:09:50.354769247 +0000 UTC m=+2528.430556115" Oct 07 13:09:52 crc kubenswrapper[5024]: I1007 13:09:52.761949 5024 scope.go:117] "RemoveContainer" containerID="a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b" Oct 07 13:09:52 crc kubenswrapper[5024]: E1007 13:09:52.762608 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:10:04 crc kubenswrapper[5024]: I1007 13:10:04.751973 5024 scope.go:117] "RemoveContainer" containerID="a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b" Oct 07 13:10:04 crc kubenswrapper[5024]: E1007 13:10:04.753561 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:10:17 crc kubenswrapper[5024]: I1007 13:10:17.751471 5024 scope.go:117] "RemoveContainer" containerID="a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b" Oct 07 13:10:17 crc kubenswrapper[5024]: E1007 13:10:17.752211 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:10:29 crc kubenswrapper[5024]: I1007 13:10:29.752130 5024 scope.go:117] "RemoveContainer" containerID="a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b" Oct 07 13:10:29 crc kubenswrapper[5024]: E1007 13:10:29.753610 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:10:42 crc kubenswrapper[5024]: I1007 13:10:42.773737 5024 scope.go:117] "RemoveContainer" containerID="a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b" Oct 07 13:10:42 crc kubenswrapper[5024]: E1007 13:10:42.775263 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:10:56 crc kubenswrapper[5024]: I1007 13:10:56.753709 5024 scope.go:117] "RemoveContainer" containerID="a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b" Oct 07 13:10:56 crc kubenswrapper[5024]: E1007 13:10:56.755515 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:11:08 crc kubenswrapper[5024]: I1007 13:11:08.752409 5024 scope.go:117] "RemoveContainer" containerID="a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b" Oct 07 13:11:08 crc kubenswrapper[5024]: E1007 13:11:08.753888 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:11:12 crc kubenswrapper[5024]: I1007 13:11:12.367051 5024 generic.go:334] "Generic (PLEG): container finished" podID="a9b13355-d69c-4b53-972f-5d7014d5a81c" containerID="f905759f0e1206f0004af8caa8e59f1852a47149aeaa152d21610dfb242eac08" exitCode=0 Oct 07 13:11:12 crc kubenswrapper[5024]: I1007 13:11:12.367194 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx" event={"ID":"a9b13355-d69c-4b53-972f-5d7014d5a81c","Type":"ContainerDied","Data":"f905759f0e1206f0004af8caa8e59f1852a47149aeaa152d21610dfb242eac08"} Oct 07 13:11:13 crc kubenswrapper[5024]: I1007 13:11:13.922123 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.119968 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a9b13355-d69c-4b53-972f-5d7014d5a81c-ovncontroller-config-0\") pod \"a9b13355-d69c-4b53-972f-5d7014d5a81c\" (UID: \"a9b13355-d69c-4b53-972f-5d7014d5a81c\") " Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.120010 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s2p6\" (UniqueName: \"kubernetes.io/projected/a9b13355-d69c-4b53-972f-5d7014d5a81c-kube-api-access-8s2p6\") pod \"a9b13355-d69c-4b53-972f-5d7014d5a81c\" (UID: \"a9b13355-d69c-4b53-972f-5d7014d5a81c\") " Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.120184 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9b13355-d69c-4b53-972f-5d7014d5a81c-ssh-key\") pod \"a9b13355-d69c-4b53-972f-5d7014d5a81c\" (UID: \"a9b13355-d69c-4b53-972f-5d7014d5a81c\") " Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.120308 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9b13355-d69c-4b53-972f-5d7014d5a81c-ceph\") pod \"a9b13355-d69c-4b53-972f-5d7014d5a81c\" (UID: \"a9b13355-d69c-4b53-972f-5d7014d5a81c\") " Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.120365 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9b13355-d69c-4b53-972f-5d7014d5a81c-inventory\") pod \"a9b13355-d69c-4b53-972f-5d7014d5a81c\" (UID: \"a9b13355-d69c-4b53-972f-5d7014d5a81c\") " Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.120462 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b13355-d69c-4b53-972f-5d7014d5a81c-ovn-combined-ca-bundle\") pod \"a9b13355-d69c-4b53-972f-5d7014d5a81c\" (UID: \"a9b13355-d69c-4b53-972f-5d7014d5a81c\") " Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.127659 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b13355-d69c-4b53-972f-5d7014d5a81c-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "a9b13355-d69c-4b53-972f-5d7014d5a81c" (UID: "a9b13355-d69c-4b53-972f-5d7014d5a81c"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.128843 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9b13355-d69c-4b53-972f-5d7014d5a81c-kube-api-access-8s2p6" (OuterVolumeSpecName: "kube-api-access-8s2p6") pod "a9b13355-d69c-4b53-972f-5d7014d5a81c" (UID: "a9b13355-d69c-4b53-972f-5d7014d5a81c"). InnerVolumeSpecName "kube-api-access-8s2p6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.130076 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b13355-d69c-4b53-972f-5d7014d5a81c-ceph" (OuterVolumeSpecName: "ceph") pod "a9b13355-d69c-4b53-972f-5d7014d5a81c" (UID: "a9b13355-d69c-4b53-972f-5d7014d5a81c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.147930 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b13355-d69c-4b53-972f-5d7014d5a81c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a9b13355-d69c-4b53-972f-5d7014d5a81c" (UID: "a9b13355-d69c-4b53-972f-5d7014d5a81c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.155747 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b13355-d69c-4b53-972f-5d7014d5a81c-inventory" (OuterVolumeSpecName: "inventory") pod "a9b13355-d69c-4b53-972f-5d7014d5a81c" (UID: "a9b13355-d69c-4b53-972f-5d7014d5a81c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.167380 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9b13355-d69c-4b53-972f-5d7014d5a81c-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "a9b13355-d69c-4b53-972f-5d7014d5a81c" (UID: "a9b13355-d69c-4b53-972f-5d7014d5a81c"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.227226 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9b13355-d69c-4b53-972f-5d7014d5a81c-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.227288 5024 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9b13355-d69c-4b53-972f-5d7014d5a81c-ceph\") on node \"crc\" DevicePath \"\"" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.227301 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9b13355-d69c-4b53-972f-5d7014d5a81c-inventory\") on node \"crc\" DevicePath \"\"" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.227317 5024 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b13355-d69c-4b53-972f-5d7014d5a81c-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 13:11:14 crc 
kubenswrapper[5024]: I1007 13:11:14.227336 5024 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a9b13355-d69c-4b53-972f-5d7014d5a81c-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.227348 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s2p6\" (UniqueName: \"kubernetes.io/projected/a9b13355-d69c-4b53-972f-5d7014d5a81c-kube-api-access-8s2p6\") on node \"crc\" DevicePath \"\"" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.398970 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx" event={"ID":"a9b13355-d69c-4b53-972f-5d7014d5a81c","Type":"ContainerDied","Data":"386ac18be6db827815f515476c49432cf7ed4d0062a0d8788a84cc2f841cda5b"} Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.399418 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="386ac18be6db827815f515476c49432cf7ed4d0062a0d8788a84cc2f841cda5b" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.399023 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jqrtx" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.528184 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4"] Oct 07 13:11:14 crc kubenswrapper[5024]: E1007 13:11:14.528741 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b13355-d69c-4b53-972f-5d7014d5a81c" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.528772 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b13355-d69c-4b53-972f-5d7014d5a81c" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.529765 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9b13355-d69c-4b53-972f-5d7014d5a81c" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.530728 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.533229 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.534318 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.537321 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.537670 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.538060 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.538443 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.538626 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-424lb" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.547499 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4"] Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.643216 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4\" (UID: \"d967b331-7a34-4912-b2f8-93187b6d1c2e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" Oct 07 
13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.643461 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4\" (UID: \"d967b331-7a34-4912-b2f8-93187b6d1c2e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.643494 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4\" (UID: \"d967b331-7a34-4912-b2f8-93187b6d1c2e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.643518 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wtnk\" (UniqueName: \"kubernetes.io/projected/d967b331-7a34-4912-b2f8-93187b6d1c2e-kube-api-access-7wtnk\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4\" (UID: \"d967b331-7a34-4912-b2f8-93187b6d1c2e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.643779 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4\" (UID: \"d967b331-7a34-4912-b2f8-93187b6d1c2e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 
13:11:14.643878 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4\" (UID: \"d967b331-7a34-4912-b2f8-93187b6d1c2e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.643906 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4\" (UID: \"d967b331-7a34-4912-b2f8-93187b6d1c2e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.745657 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4\" (UID: \"d967b331-7a34-4912-b2f8-93187b6d1c2e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.745716 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4\" (UID: \"d967b331-7a34-4912-b2f8-93187b6d1c2e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.745742 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wtnk\" (UniqueName: 
\"kubernetes.io/projected/d967b331-7a34-4912-b2f8-93187b6d1c2e-kube-api-access-7wtnk\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4\" (UID: \"d967b331-7a34-4912-b2f8-93187b6d1c2e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.745774 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4\" (UID: \"d967b331-7a34-4912-b2f8-93187b6d1c2e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.745816 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4\" (UID: \"d967b331-7a34-4912-b2f8-93187b6d1c2e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.745847 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4\" (UID: \"d967b331-7a34-4912-b2f8-93187b6d1c2e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.745883 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-inventory\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4\" (UID: \"d967b331-7a34-4912-b2f8-93187b6d1c2e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.752092 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4\" (UID: \"d967b331-7a34-4912-b2f8-93187b6d1c2e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.753500 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4\" (UID: \"d967b331-7a34-4912-b2f8-93187b6d1c2e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.754105 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4\" (UID: \"d967b331-7a34-4912-b2f8-93187b6d1c2e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.754561 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4\" (UID: \"d967b331-7a34-4912-b2f8-93187b6d1c2e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" Oct 07 13:11:14 
crc kubenswrapper[5024]: I1007 13:11:14.757084 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4\" (UID: \"d967b331-7a34-4912-b2f8-93187b6d1c2e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.759727 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4\" (UID: \"d967b331-7a34-4912-b2f8-93187b6d1c2e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.780489 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wtnk\" (UniqueName: \"kubernetes.io/projected/d967b331-7a34-4912-b2f8-93187b6d1c2e-kube-api-access-7wtnk\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4\" (UID: \"d967b331-7a34-4912-b2f8-93187b6d1c2e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" Oct 07 13:11:14 crc kubenswrapper[5024]: I1007 13:11:14.870290 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" Oct 07 13:11:15 crc kubenswrapper[5024]: I1007 13:11:15.248699 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4"] Oct 07 13:11:15 crc kubenswrapper[5024]: I1007 13:11:15.416745 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" event={"ID":"d967b331-7a34-4912-b2f8-93187b6d1c2e","Type":"ContainerStarted","Data":"7db411b38b07ef8e0fab4db1a700ade1070f3806ac3240db8b304f3929947ab3"} Oct 07 13:11:16 crc kubenswrapper[5024]: I1007 13:11:16.431214 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" event={"ID":"d967b331-7a34-4912-b2f8-93187b6d1c2e","Type":"ContainerStarted","Data":"d09ab75cdc7a7395d8d8b8114504e39d8257d0c6f69e6435a7f37b493bfe5f11"} Oct 07 13:11:16 crc kubenswrapper[5024]: I1007 13:11:16.479118 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" podStartSLOduration=1.937942568 podStartE2EDuration="2.479088872s" podCreationTimestamp="2025-10-07 13:11:14 +0000 UTC" firstStartedPulling="2025-10-07 13:11:15.258496578 +0000 UTC m=+2613.334283456" lastFinishedPulling="2025-10-07 13:11:15.799642912 +0000 UTC m=+2613.875429760" observedRunningTime="2025-10-07 13:11:16.462634559 +0000 UTC m=+2614.538421497" watchObservedRunningTime="2025-10-07 13:11:16.479088872 +0000 UTC m=+2614.554875740" Oct 07 13:11:19 crc kubenswrapper[5024]: I1007 13:11:19.752233 5024 scope.go:117] "RemoveContainer" containerID="a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b" Oct 07 13:11:19 crc kubenswrapper[5024]: E1007 13:11:19.753354 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:11:31 crc kubenswrapper[5024]: I1007 13:11:31.753970 5024 scope.go:117] "RemoveContainer" containerID="a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b" Oct 07 13:11:31 crc kubenswrapper[5024]: E1007 13:11:31.755002 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:11:43 crc kubenswrapper[5024]: I1007 13:11:43.752110 5024 scope.go:117] "RemoveContainer" containerID="a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b" Oct 07 13:11:44 crc kubenswrapper[5024]: I1007 13:11:44.809580 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerStarted","Data":"44bf427e2b898dc1d86c2ae0246dcc7247832bdeeecf4b463527fc51982a929d"} Oct 07 13:11:48 crc kubenswrapper[5024]: I1007 13:11:48.025371 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j9pw2"] Oct 07 13:11:48 crc kubenswrapper[5024]: I1007 13:11:48.028940 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j9pw2" Oct 07 13:11:48 crc kubenswrapper[5024]: I1007 13:11:48.033994 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j9pw2"] Oct 07 13:11:48 crc kubenswrapper[5024]: I1007 13:11:48.083650 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d15cc3a5-5e47-4144-88e7-c5847190acb8-catalog-content\") pod \"redhat-operators-j9pw2\" (UID: \"d15cc3a5-5e47-4144-88e7-c5847190acb8\") " pod="openshift-marketplace/redhat-operators-j9pw2" Oct 07 13:11:48 crc kubenswrapper[5024]: I1007 13:11:48.083843 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm9fv\" (UniqueName: \"kubernetes.io/projected/d15cc3a5-5e47-4144-88e7-c5847190acb8-kube-api-access-qm9fv\") pod \"redhat-operators-j9pw2\" (UID: \"d15cc3a5-5e47-4144-88e7-c5847190acb8\") " pod="openshift-marketplace/redhat-operators-j9pw2" Oct 07 13:11:48 crc kubenswrapper[5024]: I1007 13:11:48.083916 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d15cc3a5-5e47-4144-88e7-c5847190acb8-utilities\") pod \"redhat-operators-j9pw2\" (UID: \"d15cc3a5-5e47-4144-88e7-c5847190acb8\") " pod="openshift-marketplace/redhat-operators-j9pw2" Oct 07 13:11:48 crc kubenswrapper[5024]: I1007 13:11:48.185587 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d15cc3a5-5e47-4144-88e7-c5847190acb8-catalog-content\") pod \"redhat-operators-j9pw2\" (UID: \"d15cc3a5-5e47-4144-88e7-c5847190acb8\") " pod="openshift-marketplace/redhat-operators-j9pw2" Oct 07 13:11:48 crc kubenswrapper[5024]: I1007 13:11:48.186048 5024 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-qm9fv\" (UniqueName: \"kubernetes.io/projected/d15cc3a5-5e47-4144-88e7-c5847190acb8-kube-api-access-qm9fv\") pod \"redhat-operators-j9pw2\" (UID: \"d15cc3a5-5e47-4144-88e7-c5847190acb8\") " pod="openshift-marketplace/redhat-operators-j9pw2" Oct 07 13:11:48 crc kubenswrapper[5024]: I1007 13:11:48.186109 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d15cc3a5-5e47-4144-88e7-c5847190acb8-utilities\") pod \"redhat-operators-j9pw2\" (UID: \"d15cc3a5-5e47-4144-88e7-c5847190acb8\") " pod="openshift-marketplace/redhat-operators-j9pw2" Oct 07 13:11:48 crc kubenswrapper[5024]: I1007 13:11:48.186387 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d15cc3a5-5e47-4144-88e7-c5847190acb8-catalog-content\") pod \"redhat-operators-j9pw2\" (UID: \"d15cc3a5-5e47-4144-88e7-c5847190acb8\") " pod="openshift-marketplace/redhat-operators-j9pw2" Oct 07 13:11:48 crc kubenswrapper[5024]: I1007 13:11:48.186681 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d15cc3a5-5e47-4144-88e7-c5847190acb8-utilities\") pod \"redhat-operators-j9pw2\" (UID: \"d15cc3a5-5e47-4144-88e7-c5847190acb8\") " pod="openshift-marketplace/redhat-operators-j9pw2" Oct 07 13:11:48 crc kubenswrapper[5024]: I1007 13:11:48.212214 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm9fv\" (UniqueName: \"kubernetes.io/projected/d15cc3a5-5e47-4144-88e7-c5847190acb8-kube-api-access-qm9fv\") pod \"redhat-operators-j9pw2\" (UID: \"d15cc3a5-5e47-4144-88e7-c5847190acb8\") " pod="openshift-marketplace/redhat-operators-j9pw2" Oct 07 13:11:48 crc kubenswrapper[5024]: I1007 13:11:48.366614 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j9pw2" Oct 07 13:11:48 crc kubenswrapper[5024]: I1007 13:11:48.905269 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j9pw2"] Oct 07 13:11:49 crc kubenswrapper[5024]: I1007 13:11:49.861900 5024 generic.go:334] "Generic (PLEG): container finished" podID="d15cc3a5-5e47-4144-88e7-c5847190acb8" containerID="0d17e6410d8314b69743315e116f47d093389b70ba86c5397ba46df030e26374" exitCode=0 Oct 07 13:11:49 crc kubenswrapper[5024]: I1007 13:11:49.861995 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9pw2" event={"ID":"d15cc3a5-5e47-4144-88e7-c5847190acb8","Type":"ContainerDied","Data":"0d17e6410d8314b69743315e116f47d093389b70ba86c5397ba46df030e26374"} Oct 07 13:11:49 crc kubenswrapper[5024]: I1007 13:11:49.862352 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9pw2" event={"ID":"d15cc3a5-5e47-4144-88e7-c5847190acb8","Type":"ContainerStarted","Data":"c5b5889faed1b279a9c5ceb9e9e6e9f31246677a9107312beb010f14bd91ffd2"} Oct 07 13:11:53 crc kubenswrapper[5024]: I1007 13:11:53.924374 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9pw2" event={"ID":"d15cc3a5-5e47-4144-88e7-c5847190acb8","Type":"ContainerStarted","Data":"6649288d2cba7031c65ccc950845a8739328889ff6ddc70dd376cc18da4c0941"} Oct 07 13:11:54 crc kubenswrapper[5024]: I1007 13:11:54.936591 5024 generic.go:334] "Generic (PLEG): container finished" podID="d15cc3a5-5e47-4144-88e7-c5847190acb8" containerID="6649288d2cba7031c65ccc950845a8739328889ff6ddc70dd376cc18da4c0941" exitCode=0 Oct 07 13:11:54 crc kubenswrapper[5024]: I1007 13:11:54.936678 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9pw2" 
event={"ID":"d15cc3a5-5e47-4144-88e7-c5847190acb8","Type":"ContainerDied","Data":"6649288d2cba7031c65ccc950845a8739328889ff6ddc70dd376cc18da4c0941"} Oct 07 13:11:54 crc kubenswrapper[5024]: I1007 13:11:54.942479 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 07 13:11:56 crc kubenswrapper[5024]: I1007 13:11:56.965917 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9pw2" event={"ID":"d15cc3a5-5e47-4144-88e7-c5847190acb8","Type":"ContainerStarted","Data":"7586c1b3e48bfba6f8469d0078a6c36cf1681f1101a9df445cc1be8c0f3a0b91"} Oct 07 13:11:58 crc kubenswrapper[5024]: I1007 13:11:58.008277 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j9pw2" podStartSLOduration=4.36953335 podStartE2EDuration="11.00824951s" podCreationTimestamp="2025-10-07 13:11:47 +0000 UTC" firstStartedPulling="2025-10-07 13:11:49.866224425 +0000 UTC m=+2647.942011263" lastFinishedPulling="2025-10-07 13:11:56.504940575 +0000 UTC m=+2654.580727423" observedRunningTime="2025-10-07 13:11:57.996469371 +0000 UTC m=+2656.072256219" watchObservedRunningTime="2025-10-07 13:11:58.00824951 +0000 UTC m=+2656.084036358" Oct 07 13:11:58 crc kubenswrapper[5024]: I1007 13:11:58.367890 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j9pw2" Oct 07 13:11:58 crc kubenswrapper[5024]: I1007 13:11:58.367956 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j9pw2" Oct 07 13:11:59 crc kubenswrapper[5024]: I1007 13:11:59.433318 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j9pw2" podUID="d15cc3a5-5e47-4144-88e7-c5847190acb8" containerName="registry-server" probeResult="failure" output=< Oct 07 13:11:59 crc kubenswrapper[5024]: timeout: failed to connect service ":50051" 
within 1s Oct 07 13:11:59 crc kubenswrapper[5024]: > Oct 07 13:12:08 crc kubenswrapper[5024]: I1007 13:12:08.423773 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j9pw2" Oct 07 13:12:08 crc kubenswrapper[5024]: I1007 13:12:08.484309 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j9pw2" Oct 07 13:12:08 crc kubenswrapper[5024]: I1007 13:12:08.668697 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j9pw2"] Oct 07 13:12:10 crc kubenswrapper[5024]: I1007 13:12:10.098063 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j9pw2" podUID="d15cc3a5-5e47-4144-88e7-c5847190acb8" containerName="registry-server" containerID="cri-o://7586c1b3e48bfba6f8469d0078a6c36cf1681f1101a9df445cc1be8c0f3a0b91" gracePeriod=2 Oct 07 13:12:10 crc kubenswrapper[5024]: I1007 13:12:10.648383 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j9pw2" Oct 07 13:12:10 crc kubenswrapper[5024]: I1007 13:12:10.722964 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d15cc3a5-5e47-4144-88e7-c5847190acb8-catalog-content\") pod \"d15cc3a5-5e47-4144-88e7-c5847190acb8\" (UID: \"d15cc3a5-5e47-4144-88e7-c5847190acb8\") " Oct 07 13:12:10 crc kubenswrapper[5024]: I1007 13:12:10.723116 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm9fv\" (UniqueName: \"kubernetes.io/projected/d15cc3a5-5e47-4144-88e7-c5847190acb8-kube-api-access-qm9fv\") pod \"d15cc3a5-5e47-4144-88e7-c5847190acb8\" (UID: \"d15cc3a5-5e47-4144-88e7-c5847190acb8\") " Oct 07 13:12:10 crc kubenswrapper[5024]: I1007 13:12:10.723265 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d15cc3a5-5e47-4144-88e7-c5847190acb8-utilities\") pod \"d15cc3a5-5e47-4144-88e7-c5847190acb8\" (UID: \"d15cc3a5-5e47-4144-88e7-c5847190acb8\") " Oct 07 13:12:10 crc kubenswrapper[5024]: I1007 13:12:10.724307 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d15cc3a5-5e47-4144-88e7-c5847190acb8-utilities" (OuterVolumeSpecName: "utilities") pod "d15cc3a5-5e47-4144-88e7-c5847190acb8" (UID: "d15cc3a5-5e47-4144-88e7-c5847190acb8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:12:10 crc kubenswrapper[5024]: I1007 13:12:10.733350 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d15cc3a5-5e47-4144-88e7-c5847190acb8-kube-api-access-qm9fv" (OuterVolumeSpecName: "kube-api-access-qm9fv") pod "d15cc3a5-5e47-4144-88e7-c5847190acb8" (UID: "d15cc3a5-5e47-4144-88e7-c5847190acb8"). InnerVolumeSpecName "kube-api-access-qm9fv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:12:10 crc kubenswrapper[5024]: I1007 13:12:10.803294 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d15cc3a5-5e47-4144-88e7-c5847190acb8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d15cc3a5-5e47-4144-88e7-c5847190acb8" (UID: "d15cc3a5-5e47-4144-88e7-c5847190acb8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:12:10 crc kubenswrapper[5024]: I1007 13:12:10.825708 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm9fv\" (UniqueName: \"kubernetes.io/projected/d15cc3a5-5e47-4144-88e7-c5847190acb8-kube-api-access-qm9fv\") on node \"crc\" DevicePath \"\"" Oct 07 13:12:10 crc kubenswrapper[5024]: I1007 13:12:10.825949 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d15cc3a5-5e47-4144-88e7-c5847190acb8-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 13:12:10 crc kubenswrapper[5024]: I1007 13:12:10.825979 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d15cc3a5-5e47-4144-88e7-c5847190acb8-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 13:12:11 crc kubenswrapper[5024]: I1007 13:12:11.113672 5024 generic.go:334] "Generic (PLEG): container finished" podID="d15cc3a5-5e47-4144-88e7-c5847190acb8" containerID="7586c1b3e48bfba6f8469d0078a6c36cf1681f1101a9df445cc1be8c0f3a0b91" exitCode=0 Oct 07 13:12:11 crc kubenswrapper[5024]: I1007 13:12:11.113729 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9pw2" event={"ID":"d15cc3a5-5e47-4144-88e7-c5847190acb8","Type":"ContainerDied","Data":"7586c1b3e48bfba6f8469d0078a6c36cf1681f1101a9df445cc1be8c0f3a0b91"} Oct 07 13:12:11 crc kubenswrapper[5024]: I1007 13:12:11.113755 5024 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j9pw2" Oct 07 13:12:11 crc kubenswrapper[5024]: I1007 13:12:11.113794 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9pw2" event={"ID":"d15cc3a5-5e47-4144-88e7-c5847190acb8","Type":"ContainerDied","Data":"c5b5889faed1b279a9c5ceb9e9e6e9f31246677a9107312beb010f14bd91ffd2"} Oct 07 13:12:11 crc kubenswrapper[5024]: I1007 13:12:11.113814 5024 scope.go:117] "RemoveContainer" containerID="7586c1b3e48bfba6f8469d0078a6c36cf1681f1101a9df445cc1be8c0f3a0b91" Oct 07 13:12:11 crc kubenswrapper[5024]: I1007 13:12:11.147824 5024 scope.go:117] "RemoveContainer" containerID="6649288d2cba7031c65ccc950845a8739328889ff6ddc70dd376cc18da4c0941" Oct 07 13:12:11 crc kubenswrapper[5024]: I1007 13:12:11.174618 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j9pw2"] Oct 07 13:12:11 crc kubenswrapper[5024]: I1007 13:12:11.189210 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j9pw2"] Oct 07 13:12:11 crc kubenswrapper[5024]: I1007 13:12:11.197811 5024 scope.go:117] "RemoveContainer" containerID="0d17e6410d8314b69743315e116f47d093389b70ba86c5397ba46df030e26374" Oct 07 13:12:11 crc kubenswrapper[5024]: I1007 13:12:11.254759 5024 scope.go:117] "RemoveContainer" containerID="7586c1b3e48bfba6f8469d0078a6c36cf1681f1101a9df445cc1be8c0f3a0b91" Oct 07 13:12:11 crc kubenswrapper[5024]: E1007 13:12:11.255666 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7586c1b3e48bfba6f8469d0078a6c36cf1681f1101a9df445cc1be8c0f3a0b91\": container with ID starting with 7586c1b3e48bfba6f8469d0078a6c36cf1681f1101a9df445cc1be8c0f3a0b91 not found: ID does not exist" containerID="7586c1b3e48bfba6f8469d0078a6c36cf1681f1101a9df445cc1be8c0f3a0b91" Oct 07 13:12:11 crc kubenswrapper[5024]: I1007 13:12:11.255711 5024 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7586c1b3e48bfba6f8469d0078a6c36cf1681f1101a9df445cc1be8c0f3a0b91"} err="failed to get container status \"7586c1b3e48bfba6f8469d0078a6c36cf1681f1101a9df445cc1be8c0f3a0b91\": rpc error: code = NotFound desc = could not find container \"7586c1b3e48bfba6f8469d0078a6c36cf1681f1101a9df445cc1be8c0f3a0b91\": container with ID starting with 7586c1b3e48bfba6f8469d0078a6c36cf1681f1101a9df445cc1be8c0f3a0b91 not found: ID does not exist" Oct 07 13:12:11 crc kubenswrapper[5024]: I1007 13:12:11.255737 5024 scope.go:117] "RemoveContainer" containerID="6649288d2cba7031c65ccc950845a8739328889ff6ddc70dd376cc18da4c0941" Oct 07 13:12:11 crc kubenswrapper[5024]: E1007 13:12:11.256440 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6649288d2cba7031c65ccc950845a8739328889ff6ddc70dd376cc18da4c0941\": container with ID starting with 6649288d2cba7031c65ccc950845a8739328889ff6ddc70dd376cc18da4c0941 not found: ID does not exist" containerID="6649288d2cba7031c65ccc950845a8739328889ff6ddc70dd376cc18da4c0941" Oct 07 13:12:11 crc kubenswrapper[5024]: I1007 13:12:11.256463 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6649288d2cba7031c65ccc950845a8739328889ff6ddc70dd376cc18da4c0941"} err="failed to get container status \"6649288d2cba7031c65ccc950845a8739328889ff6ddc70dd376cc18da4c0941\": rpc error: code = NotFound desc = could not find container \"6649288d2cba7031c65ccc950845a8739328889ff6ddc70dd376cc18da4c0941\": container with ID starting with 6649288d2cba7031c65ccc950845a8739328889ff6ddc70dd376cc18da4c0941 not found: ID does not exist" Oct 07 13:12:11 crc kubenswrapper[5024]: I1007 13:12:11.256480 5024 scope.go:117] "RemoveContainer" containerID="0d17e6410d8314b69743315e116f47d093389b70ba86c5397ba46df030e26374" Oct 07 13:12:11 crc kubenswrapper[5024]: E1007 
13:12:11.256815 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d17e6410d8314b69743315e116f47d093389b70ba86c5397ba46df030e26374\": container with ID starting with 0d17e6410d8314b69743315e116f47d093389b70ba86c5397ba46df030e26374 not found: ID does not exist" containerID="0d17e6410d8314b69743315e116f47d093389b70ba86c5397ba46df030e26374" Oct 07 13:12:11 crc kubenswrapper[5024]: I1007 13:12:11.256840 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d17e6410d8314b69743315e116f47d093389b70ba86c5397ba46df030e26374"} err="failed to get container status \"0d17e6410d8314b69743315e116f47d093389b70ba86c5397ba46df030e26374\": rpc error: code = NotFound desc = could not find container \"0d17e6410d8314b69743315e116f47d093389b70ba86c5397ba46df030e26374\": container with ID starting with 0d17e6410d8314b69743315e116f47d093389b70ba86c5397ba46df030e26374 not found: ID does not exist" Oct 07 13:12:12 crc kubenswrapper[5024]: I1007 13:12:12.768017 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d15cc3a5-5e47-4144-88e7-c5847190acb8" path="/var/lib/kubelet/pods/d15cc3a5-5e47-4144-88e7-c5847190acb8/volumes" Oct 07 13:12:25 crc kubenswrapper[5024]: I1007 13:12:25.274409 5024 generic.go:334] "Generic (PLEG): container finished" podID="d967b331-7a34-4912-b2f8-93187b6d1c2e" containerID="d09ab75cdc7a7395d8d8b8114504e39d8257d0c6f69e6435a7f37b493bfe5f11" exitCode=0 Oct 07 13:12:25 crc kubenswrapper[5024]: I1007 13:12:25.274504 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" event={"ID":"d967b331-7a34-4912-b2f8-93187b6d1c2e","Type":"ContainerDied","Data":"d09ab75cdc7a7395d8d8b8114504e39d8257d0c6f69e6435a7f37b493bfe5f11"} Oct 07 13:12:26 crc kubenswrapper[5024]: I1007 13:12:26.746976 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" Oct 07 13:12:26 crc kubenswrapper[5024]: I1007 13:12:26.779741 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-inventory\") pod \"d967b331-7a34-4912-b2f8-93187b6d1c2e\" (UID: \"d967b331-7a34-4912-b2f8-93187b6d1c2e\") " Oct 07 13:12:26 crc kubenswrapper[5024]: I1007 13:12:26.780258 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-ssh-key\") pod \"d967b331-7a34-4912-b2f8-93187b6d1c2e\" (UID: \"d967b331-7a34-4912-b2f8-93187b6d1c2e\") " Oct 07 13:12:26 crc kubenswrapper[5024]: I1007 13:12:26.780443 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-nova-metadata-neutron-config-0\") pod \"d967b331-7a34-4912-b2f8-93187b6d1c2e\" (UID: \"d967b331-7a34-4912-b2f8-93187b6d1c2e\") " Oct 07 13:12:26 crc kubenswrapper[5024]: I1007 13:12:26.780790 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wtnk\" (UniqueName: \"kubernetes.io/projected/d967b331-7a34-4912-b2f8-93187b6d1c2e-kube-api-access-7wtnk\") pod \"d967b331-7a34-4912-b2f8-93187b6d1c2e\" (UID: \"d967b331-7a34-4912-b2f8-93187b6d1c2e\") " Oct 07 13:12:26 crc kubenswrapper[5024]: I1007 13:12:26.781058 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"d967b331-7a34-4912-b2f8-93187b6d1c2e\" (UID: \"d967b331-7a34-4912-b2f8-93187b6d1c2e\") " Oct 07 13:12:26 crc kubenswrapper[5024]: I1007 
13:12:26.781278 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-ceph\") pod \"d967b331-7a34-4912-b2f8-93187b6d1c2e\" (UID: \"d967b331-7a34-4912-b2f8-93187b6d1c2e\") " Oct 07 13:12:26 crc kubenswrapper[5024]: I1007 13:12:26.781457 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-neutron-metadata-combined-ca-bundle\") pod \"d967b331-7a34-4912-b2f8-93187b6d1c2e\" (UID: \"d967b331-7a34-4912-b2f8-93187b6d1c2e\") " Oct 07 13:12:26 crc kubenswrapper[5024]: I1007 13:12:26.800313 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d967b331-7a34-4912-b2f8-93187b6d1c2e-kube-api-access-7wtnk" (OuterVolumeSpecName: "kube-api-access-7wtnk") pod "d967b331-7a34-4912-b2f8-93187b6d1c2e" (UID: "d967b331-7a34-4912-b2f8-93187b6d1c2e"). InnerVolumeSpecName "kube-api-access-7wtnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:12:26 crc kubenswrapper[5024]: I1007 13:12:26.800440 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "d967b331-7a34-4912-b2f8-93187b6d1c2e" (UID: "d967b331-7a34-4912-b2f8-93187b6d1c2e"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:12:26 crc kubenswrapper[5024]: I1007 13:12:26.801459 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-ceph" (OuterVolumeSpecName: "ceph") pod "d967b331-7a34-4912-b2f8-93187b6d1c2e" (UID: "d967b331-7a34-4912-b2f8-93187b6d1c2e"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:12:26 crc kubenswrapper[5024]: I1007 13:12:26.814644 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d967b331-7a34-4912-b2f8-93187b6d1c2e" (UID: "d967b331-7a34-4912-b2f8-93187b6d1c2e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:12:26 crc kubenswrapper[5024]: I1007 13:12:26.827541 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-inventory" (OuterVolumeSpecName: "inventory") pod "d967b331-7a34-4912-b2f8-93187b6d1c2e" (UID: "d967b331-7a34-4912-b2f8-93187b6d1c2e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:12:26 crc kubenswrapper[5024]: I1007 13:12:26.827752 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "d967b331-7a34-4912-b2f8-93187b6d1c2e" (UID: "d967b331-7a34-4912-b2f8-93187b6d1c2e"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:12:26 crc kubenswrapper[5024]: I1007 13:12:26.835702 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "d967b331-7a34-4912-b2f8-93187b6d1c2e" (UID: "d967b331-7a34-4912-b2f8-93187b6d1c2e"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:12:26 crc kubenswrapper[5024]: I1007 13:12:26.885411 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wtnk\" (UniqueName: \"kubernetes.io/projected/d967b331-7a34-4912-b2f8-93187b6d1c2e-kube-api-access-7wtnk\") on node \"crc\" DevicePath \"\"" Oct 07 13:12:26 crc kubenswrapper[5024]: I1007 13:12:26.885455 5024 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 07 13:12:26 crc kubenswrapper[5024]: I1007 13:12:26.885488 5024 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-ceph\") on node \"crc\" DevicePath \"\"" Oct 07 13:12:26 crc kubenswrapper[5024]: I1007 13:12:26.885504 5024 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 13:12:26 crc kubenswrapper[5024]: I1007 13:12:26.885516 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-inventory\") on node \"crc\" DevicePath \"\"" Oct 07 13:12:26 crc kubenswrapper[5024]: I1007 13:12:26.885525 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 07 13:12:26 crc kubenswrapper[5024]: I1007 13:12:26.885535 5024 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d967b331-7a34-4912-b2f8-93187b6d1c2e-nova-metadata-neutron-config-0\") on node \"crc\" 
DevicePath \"\"" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.300383 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" event={"ID":"d967b331-7a34-4912-b2f8-93187b6d1c2e","Type":"ContainerDied","Data":"7db411b38b07ef8e0fab4db1a700ade1070f3806ac3240db8b304f3929947ab3"} Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.300497 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.300533 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7db411b38b07ef8e0fab4db1a700ade1070f3806ac3240db8b304f3929947ab3" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.564503 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f"] Oct 07 13:12:27 crc kubenswrapper[5024]: E1007 13:12:27.565130 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d15cc3a5-5e47-4144-88e7-c5847190acb8" containerName="extract-content" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.565185 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="d15cc3a5-5e47-4144-88e7-c5847190acb8" containerName="extract-content" Oct 07 13:12:27 crc kubenswrapper[5024]: E1007 13:12:27.565216 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d15cc3a5-5e47-4144-88e7-c5847190acb8" containerName="extract-utilities" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.565232 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="d15cc3a5-5e47-4144-88e7-c5847190acb8" containerName="extract-utilities" Oct 07 13:12:27 crc kubenswrapper[5024]: E1007 13:12:27.565249 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d15cc3a5-5e47-4144-88e7-c5847190acb8" containerName="registry-server" Oct 07 13:12:27 
crc kubenswrapper[5024]: I1007 13:12:27.565262 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="d15cc3a5-5e47-4144-88e7-c5847190acb8" containerName="registry-server" Oct 07 13:12:27 crc kubenswrapper[5024]: E1007 13:12:27.565294 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d967b331-7a34-4912-b2f8-93187b6d1c2e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.565309 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="d967b331-7a34-4912-b2f8-93187b6d1c2e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.565656 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="d967b331-7a34-4912-b2f8-93187b6d1c2e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.565709 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="d15cc3a5-5e47-4144-88e7-c5847190acb8" containerName="registry-server" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.566841 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.569062 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.572578 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.572625 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.572870 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.572671 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.573196 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-424lb" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.578470 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f"] Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.599833 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f\" (UID: \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.599906 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: 
\"kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f\" (UID: \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.600131 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f\" (UID: \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.600231 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f\" (UID: \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.600405 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcl9p\" (UniqueName: \"kubernetes.io/projected/a204d343-554a-475d-8b5b-dc0a7d5a09c9-kube-api-access-bcl9p\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f\" (UID: \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.600588 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f\" (UID: \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.703378 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f\" (UID: \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.703747 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f\" (UID: \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.703781 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f\" (UID: \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.703808 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f\" (UID: \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.703860 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcl9p\" (UniqueName: 
\"kubernetes.io/projected/a204d343-554a-475d-8b5b-dc0a7d5a09c9-kube-api-access-bcl9p\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f\" (UID: \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.703941 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f\" (UID: \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.708606 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f\" (UID: \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.709640 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f\" (UID: \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.711519 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f\" (UID: \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f" Oct 07 13:12:27 crc 
kubenswrapper[5024]: I1007 13:12:27.711964 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f\" (UID: \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.716664 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f\" (UID: \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.721990 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcl9p\" (UniqueName: \"kubernetes.io/projected/a204d343-554a-475d-8b5b-dc0a7d5a09c9-kube-api-access-bcl9p\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f\" (UID: \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f" Oct 07 13:12:27 crc kubenswrapper[5024]: I1007 13:12:27.895083 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f" Oct 07 13:12:28 crc kubenswrapper[5024]: I1007 13:12:28.631266 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f"] Oct 07 13:12:29 crc kubenswrapper[5024]: I1007 13:12:29.324668 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f" event={"ID":"a204d343-554a-475d-8b5b-dc0a7d5a09c9","Type":"ContainerStarted","Data":"4a0d88e88b303a998a404ae160e2ed970da360e72ba51f76c32c5d096b46d500"} Oct 07 13:12:30 crc kubenswrapper[5024]: I1007 13:12:30.334980 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f" event={"ID":"a204d343-554a-475d-8b5b-dc0a7d5a09c9","Type":"ContainerStarted","Data":"91d79fcaa356f8527679ba0a58e606ade87251dd7d7556be01c4f14c1b165bbd"} Oct 07 13:12:30 crc kubenswrapper[5024]: I1007 13:12:30.360236 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f" podStartSLOduration=2.612030019 podStartE2EDuration="3.360212385s" podCreationTimestamp="2025-10-07 13:12:27 +0000 UTC" firstStartedPulling="2025-10-07 13:12:28.640901025 +0000 UTC m=+2686.716687863" lastFinishedPulling="2025-10-07 13:12:29.389083351 +0000 UTC m=+2687.464870229" observedRunningTime="2025-10-07 13:12:30.35969293 +0000 UTC m=+2688.435479808" watchObservedRunningTime="2025-10-07 13:12:30.360212385 +0000 UTC m=+2688.435999263" Oct 07 13:13:13 crc kubenswrapper[5024]: I1007 13:13:13.900166 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xrg5q"] Oct 07 13:13:13 crc kubenswrapper[5024]: I1007 13:13:13.903622 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xrg5q" Oct 07 13:13:13 crc kubenswrapper[5024]: I1007 13:13:13.928547 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrg5q"] Oct 07 13:13:14 crc kubenswrapper[5024]: I1007 13:13:14.021001 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86d168e8-a919-401b-9ced-bfa6460bbb0a-utilities\") pod \"redhat-marketplace-xrg5q\" (UID: \"86d168e8-a919-401b-9ced-bfa6460bbb0a\") " pod="openshift-marketplace/redhat-marketplace-xrg5q" Oct 07 13:13:14 crc kubenswrapper[5024]: I1007 13:13:14.021072 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpckm\" (UniqueName: \"kubernetes.io/projected/86d168e8-a919-401b-9ced-bfa6460bbb0a-kube-api-access-gpckm\") pod \"redhat-marketplace-xrg5q\" (UID: \"86d168e8-a919-401b-9ced-bfa6460bbb0a\") " pod="openshift-marketplace/redhat-marketplace-xrg5q" Oct 07 13:13:14 crc kubenswrapper[5024]: I1007 13:13:14.021441 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86d168e8-a919-401b-9ced-bfa6460bbb0a-catalog-content\") pod \"redhat-marketplace-xrg5q\" (UID: \"86d168e8-a919-401b-9ced-bfa6460bbb0a\") " pod="openshift-marketplace/redhat-marketplace-xrg5q" Oct 07 13:13:14 crc kubenswrapper[5024]: I1007 13:13:14.123066 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86d168e8-a919-401b-9ced-bfa6460bbb0a-utilities\") pod \"redhat-marketplace-xrg5q\" (UID: \"86d168e8-a919-401b-9ced-bfa6460bbb0a\") " pod="openshift-marketplace/redhat-marketplace-xrg5q" Oct 07 13:13:14 crc kubenswrapper[5024]: I1007 13:13:14.123644 5024 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-gpckm\" (UniqueName: \"kubernetes.io/projected/86d168e8-a919-401b-9ced-bfa6460bbb0a-kube-api-access-gpckm\") pod \"redhat-marketplace-xrg5q\" (UID: \"86d168e8-a919-401b-9ced-bfa6460bbb0a\") " pod="openshift-marketplace/redhat-marketplace-xrg5q" Oct 07 13:13:14 crc kubenswrapper[5024]: I1007 13:13:14.123688 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86d168e8-a919-401b-9ced-bfa6460bbb0a-utilities\") pod \"redhat-marketplace-xrg5q\" (UID: \"86d168e8-a919-401b-9ced-bfa6460bbb0a\") " pod="openshift-marketplace/redhat-marketplace-xrg5q" Oct 07 13:13:14 crc kubenswrapper[5024]: I1007 13:13:14.124041 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86d168e8-a919-401b-9ced-bfa6460bbb0a-catalog-content\") pod \"redhat-marketplace-xrg5q\" (UID: \"86d168e8-a919-401b-9ced-bfa6460bbb0a\") " pod="openshift-marketplace/redhat-marketplace-xrg5q" Oct 07 13:13:14 crc kubenswrapper[5024]: I1007 13:13:14.124426 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86d168e8-a919-401b-9ced-bfa6460bbb0a-catalog-content\") pod \"redhat-marketplace-xrg5q\" (UID: \"86d168e8-a919-401b-9ced-bfa6460bbb0a\") " pod="openshift-marketplace/redhat-marketplace-xrg5q" Oct 07 13:13:14 crc kubenswrapper[5024]: I1007 13:13:14.162634 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpckm\" (UniqueName: \"kubernetes.io/projected/86d168e8-a919-401b-9ced-bfa6460bbb0a-kube-api-access-gpckm\") pod \"redhat-marketplace-xrg5q\" (UID: \"86d168e8-a919-401b-9ced-bfa6460bbb0a\") " pod="openshift-marketplace/redhat-marketplace-xrg5q" Oct 07 13:13:14 crc kubenswrapper[5024]: I1007 13:13:14.231328 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xrg5q" Oct 07 13:13:14 crc kubenswrapper[5024]: I1007 13:13:14.730373 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrg5q"] Oct 07 13:13:14 crc kubenswrapper[5024]: I1007 13:13:14.812450 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrg5q" event={"ID":"86d168e8-a919-401b-9ced-bfa6460bbb0a","Type":"ContainerStarted","Data":"003bf3522c2ca18fec3c4f82aa999ba74c73d13cd67051c7cc5a40bcfb9c4aa2"} Oct 07 13:13:15 crc kubenswrapper[5024]: I1007 13:13:15.829015 5024 generic.go:334] "Generic (PLEG): container finished" podID="86d168e8-a919-401b-9ced-bfa6460bbb0a" containerID="deaf6f147722694eeb61d46e1625170bec35dd34dd61c2568f62c5f388dea22f" exitCode=0 Oct 07 13:13:15 crc kubenswrapper[5024]: I1007 13:13:15.829089 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrg5q" event={"ID":"86d168e8-a919-401b-9ced-bfa6460bbb0a","Type":"ContainerDied","Data":"deaf6f147722694eeb61d46e1625170bec35dd34dd61c2568f62c5f388dea22f"} Oct 07 13:13:16 crc kubenswrapper[5024]: I1007 13:13:16.847864 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrg5q" event={"ID":"86d168e8-a919-401b-9ced-bfa6460bbb0a","Type":"ContainerStarted","Data":"3a0d1c8e8da4ac45ae2a558713d83cba3d36c9c6dd367812acf9028030a19885"} Oct 07 13:13:17 crc kubenswrapper[5024]: I1007 13:13:17.863257 5024 generic.go:334] "Generic (PLEG): container finished" podID="86d168e8-a919-401b-9ced-bfa6460bbb0a" containerID="3a0d1c8e8da4ac45ae2a558713d83cba3d36c9c6dd367812acf9028030a19885" exitCode=0 Oct 07 13:13:17 crc kubenswrapper[5024]: I1007 13:13:17.863414 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrg5q" 
event={"ID":"86d168e8-a919-401b-9ced-bfa6460bbb0a","Type":"ContainerDied","Data":"3a0d1c8e8da4ac45ae2a558713d83cba3d36c9c6dd367812acf9028030a19885"} Oct 07 13:13:19 crc kubenswrapper[5024]: I1007 13:13:19.885929 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrg5q" event={"ID":"86d168e8-a919-401b-9ced-bfa6460bbb0a","Type":"ContainerStarted","Data":"7315e2ed33f72f9580da89abe4cd5a24e0374d37f5d275e7d0dde2457c1dd473"} Oct 07 13:13:19 crc kubenswrapper[5024]: I1007 13:13:19.928581 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xrg5q" podStartSLOduration=3.501507165 podStartE2EDuration="6.928560072s" podCreationTimestamp="2025-10-07 13:13:13 +0000 UTC" firstStartedPulling="2025-10-07 13:13:15.832570305 +0000 UTC m=+2733.908357143" lastFinishedPulling="2025-10-07 13:13:19.259623202 +0000 UTC m=+2737.335410050" observedRunningTime="2025-10-07 13:13:19.913892969 +0000 UTC m=+2737.989679807" watchObservedRunningTime="2025-10-07 13:13:19.928560072 +0000 UTC m=+2738.004346920" Oct 07 13:13:20 crc kubenswrapper[5024]: I1007 13:13:20.513890 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nll56"] Oct 07 13:13:20 crc kubenswrapper[5024]: I1007 13:13:20.516253 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nll56" Oct 07 13:13:20 crc kubenswrapper[5024]: I1007 13:13:20.527015 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nll56"] Oct 07 13:13:20 crc kubenswrapper[5024]: I1007 13:13:20.676217 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63bb6bad-087a-4e7e-84cd-557668be48e6-catalog-content\") pod \"certified-operators-nll56\" (UID: \"63bb6bad-087a-4e7e-84cd-557668be48e6\") " pod="openshift-marketplace/certified-operators-nll56" Oct 07 13:13:20 crc kubenswrapper[5024]: I1007 13:13:20.676701 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4wql\" (UniqueName: \"kubernetes.io/projected/63bb6bad-087a-4e7e-84cd-557668be48e6-kube-api-access-m4wql\") pod \"certified-operators-nll56\" (UID: \"63bb6bad-087a-4e7e-84cd-557668be48e6\") " pod="openshift-marketplace/certified-operators-nll56" Oct 07 13:13:20 crc kubenswrapper[5024]: I1007 13:13:20.676751 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63bb6bad-087a-4e7e-84cd-557668be48e6-utilities\") pod \"certified-operators-nll56\" (UID: \"63bb6bad-087a-4e7e-84cd-557668be48e6\") " pod="openshift-marketplace/certified-operators-nll56" Oct 07 13:13:20 crc kubenswrapper[5024]: I1007 13:13:20.778901 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4wql\" (UniqueName: \"kubernetes.io/projected/63bb6bad-087a-4e7e-84cd-557668be48e6-kube-api-access-m4wql\") pod \"certified-operators-nll56\" (UID: \"63bb6bad-087a-4e7e-84cd-557668be48e6\") " pod="openshift-marketplace/certified-operators-nll56" Oct 07 13:13:20 crc kubenswrapper[5024]: I1007 13:13:20.778970 5024 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63bb6bad-087a-4e7e-84cd-557668be48e6-utilities\") pod \"certified-operators-nll56\" (UID: \"63bb6bad-087a-4e7e-84cd-557668be48e6\") " pod="openshift-marketplace/certified-operators-nll56" Oct 07 13:13:20 crc kubenswrapper[5024]: I1007 13:13:20.779101 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63bb6bad-087a-4e7e-84cd-557668be48e6-catalog-content\") pod \"certified-operators-nll56\" (UID: \"63bb6bad-087a-4e7e-84cd-557668be48e6\") " pod="openshift-marketplace/certified-operators-nll56" Oct 07 13:13:20 crc kubenswrapper[5024]: I1007 13:13:20.779689 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63bb6bad-087a-4e7e-84cd-557668be48e6-catalog-content\") pod \"certified-operators-nll56\" (UID: \"63bb6bad-087a-4e7e-84cd-557668be48e6\") " pod="openshift-marketplace/certified-operators-nll56" Oct 07 13:13:20 crc kubenswrapper[5024]: I1007 13:13:20.779795 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63bb6bad-087a-4e7e-84cd-557668be48e6-utilities\") pod \"certified-operators-nll56\" (UID: \"63bb6bad-087a-4e7e-84cd-557668be48e6\") " pod="openshift-marketplace/certified-operators-nll56" Oct 07 13:13:20 crc kubenswrapper[5024]: I1007 13:13:20.800462 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4wql\" (UniqueName: \"kubernetes.io/projected/63bb6bad-087a-4e7e-84cd-557668be48e6-kube-api-access-m4wql\") pod \"certified-operators-nll56\" (UID: \"63bb6bad-087a-4e7e-84cd-557668be48e6\") " pod="openshift-marketplace/certified-operators-nll56" Oct 07 13:13:20 crc kubenswrapper[5024]: I1007 13:13:20.847860 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nll56" Oct 07 13:13:21 crc kubenswrapper[5024]: I1007 13:13:21.434394 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nll56"] Oct 07 13:13:21 crc kubenswrapper[5024]: I1007 13:13:21.907601 5024 generic.go:334] "Generic (PLEG): container finished" podID="63bb6bad-087a-4e7e-84cd-557668be48e6" containerID="f68eec53b48bcaafaea050e20e8f9060cb93187cfc81e3da39e2376cdcc7c231" exitCode=0 Oct 07 13:13:21 crc kubenswrapper[5024]: I1007 13:13:21.907695 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nll56" event={"ID":"63bb6bad-087a-4e7e-84cd-557668be48e6","Type":"ContainerDied","Data":"f68eec53b48bcaafaea050e20e8f9060cb93187cfc81e3da39e2376cdcc7c231"} Oct 07 13:13:21 crc kubenswrapper[5024]: I1007 13:13:21.907752 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nll56" event={"ID":"63bb6bad-087a-4e7e-84cd-557668be48e6","Type":"ContainerStarted","Data":"357c19680faa83f1d92a684ed774d8a196d562ced14144436b40d7c9923f4a56"} Oct 07 13:13:23 crc kubenswrapper[5024]: I1007 13:13:23.935230 5024 generic.go:334] "Generic (PLEG): container finished" podID="63bb6bad-087a-4e7e-84cd-557668be48e6" containerID="9566ef7e1aa018bc90e95dd3823e9698dfc5e2ad7491cfd4cc41e33c9bef4bce" exitCode=0 Oct 07 13:13:23 crc kubenswrapper[5024]: I1007 13:13:23.935334 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nll56" event={"ID":"63bb6bad-087a-4e7e-84cd-557668be48e6","Type":"ContainerDied","Data":"9566ef7e1aa018bc90e95dd3823e9698dfc5e2ad7491cfd4cc41e33c9bef4bce"} Oct 07 13:13:24 crc kubenswrapper[5024]: I1007 13:13:24.232378 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xrg5q" Oct 07 13:13:24 crc kubenswrapper[5024]: I1007 13:13:24.232431 5024 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xrg5q" Oct 07 13:13:24 crc kubenswrapper[5024]: I1007 13:13:24.280689 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xrg5q" Oct 07 13:13:24 crc kubenswrapper[5024]: I1007 13:13:24.953462 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nll56" event={"ID":"63bb6bad-087a-4e7e-84cd-557668be48e6","Type":"ContainerStarted","Data":"b8dc6ba49182c8ffc7e9aa0cdaab134d4c739fe613ccd8ead4fda4865e787d5c"} Oct 07 13:13:24 crc kubenswrapper[5024]: I1007 13:13:24.992458 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nll56" podStartSLOduration=2.384611728 podStartE2EDuration="4.992436221s" podCreationTimestamp="2025-10-07 13:13:20 +0000 UTC" firstStartedPulling="2025-10-07 13:13:21.909801446 +0000 UTC m=+2739.985588284" lastFinishedPulling="2025-10-07 13:13:24.517625929 +0000 UTC m=+2742.593412777" observedRunningTime="2025-10-07 13:13:24.986404737 +0000 UTC m=+2743.062191615" watchObservedRunningTime="2025-10-07 13:13:24.992436221 +0000 UTC m=+2743.068223069" Oct 07 13:13:25 crc kubenswrapper[5024]: I1007 13:13:25.020218 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xrg5q" Oct 07 13:13:25 crc kubenswrapper[5024]: I1007 13:13:25.878226 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrg5q"] Oct 07 13:13:26 crc kubenswrapper[5024]: I1007 13:13:26.979342 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xrg5q" podUID="86d168e8-a919-401b-9ced-bfa6460bbb0a" containerName="registry-server" containerID="cri-o://7315e2ed33f72f9580da89abe4cd5a24e0374d37f5d275e7d0dde2457c1dd473" gracePeriod=2 Oct 07 
13:13:27 crc kubenswrapper[5024]: I1007 13:13:27.510718 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xrg5q" Oct 07 13:13:27 crc kubenswrapper[5024]: I1007 13:13:27.650472 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpckm\" (UniqueName: \"kubernetes.io/projected/86d168e8-a919-401b-9ced-bfa6460bbb0a-kube-api-access-gpckm\") pod \"86d168e8-a919-401b-9ced-bfa6460bbb0a\" (UID: \"86d168e8-a919-401b-9ced-bfa6460bbb0a\") " Oct 07 13:13:27 crc kubenswrapper[5024]: I1007 13:13:27.650586 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86d168e8-a919-401b-9ced-bfa6460bbb0a-catalog-content\") pod \"86d168e8-a919-401b-9ced-bfa6460bbb0a\" (UID: \"86d168e8-a919-401b-9ced-bfa6460bbb0a\") " Oct 07 13:13:27 crc kubenswrapper[5024]: I1007 13:13:27.650672 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86d168e8-a919-401b-9ced-bfa6460bbb0a-utilities\") pod \"86d168e8-a919-401b-9ced-bfa6460bbb0a\" (UID: \"86d168e8-a919-401b-9ced-bfa6460bbb0a\") " Oct 07 13:13:27 crc kubenswrapper[5024]: I1007 13:13:27.654378 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86d168e8-a919-401b-9ced-bfa6460bbb0a-utilities" (OuterVolumeSpecName: "utilities") pod "86d168e8-a919-401b-9ced-bfa6460bbb0a" (UID: "86d168e8-a919-401b-9ced-bfa6460bbb0a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:13:27 crc kubenswrapper[5024]: I1007 13:13:27.660940 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86d168e8-a919-401b-9ced-bfa6460bbb0a-kube-api-access-gpckm" (OuterVolumeSpecName: "kube-api-access-gpckm") pod "86d168e8-a919-401b-9ced-bfa6460bbb0a" (UID: "86d168e8-a919-401b-9ced-bfa6460bbb0a"). InnerVolumeSpecName "kube-api-access-gpckm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:13:27 crc kubenswrapper[5024]: I1007 13:13:27.754157 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpckm\" (UniqueName: \"kubernetes.io/projected/86d168e8-a919-401b-9ced-bfa6460bbb0a-kube-api-access-gpckm\") on node \"crc\" DevicePath \"\"" Oct 07 13:13:27 crc kubenswrapper[5024]: I1007 13:13:27.754506 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86d168e8-a919-401b-9ced-bfa6460bbb0a-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 13:13:27 crc kubenswrapper[5024]: I1007 13:13:27.830718 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86d168e8-a919-401b-9ced-bfa6460bbb0a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86d168e8-a919-401b-9ced-bfa6460bbb0a" (UID: "86d168e8-a919-401b-9ced-bfa6460bbb0a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:13:27 crc kubenswrapper[5024]: I1007 13:13:27.857668 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86d168e8-a919-401b-9ced-bfa6460bbb0a-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 13:13:28 crc kubenswrapper[5024]: I1007 13:13:28.004118 5024 generic.go:334] "Generic (PLEG): container finished" podID="86d168e8-a919-401b-9ced-bfa6460bbb0a" containerID="7315e2ed33f72f9580da89abe4cd5a24e0374d37f5d275e7d0dde2457c1dd473" exitCode=0 Oct 07 13:13:28 crc kubenswrapper[5024]: I1007 13:13:28.004575 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrg5q" event={"ID":"86d168e8-a919-401b-9ced-bfa6460bbb0a","Type":"ContainerDied","Data":"7315e2ed33f72f9580da89abe4cd5a24e0374d37f5d275e7d0dde2457c1dd473"} Oct 07 13:13:28 crc kubenswrapper[5024]: I1007 13:13:28.004620 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrg5q" event={"ID":"86d168e8-a919-401b-9ced-bfa6460bbb0a","Type":"ContainerDied","Data":"003bf3522c2ca18fec3c4f82aa999ba74c73d13cd67051c7cc5a40bcfb9c4aa2"} Oct 07 13:13:28 crc kubenswrapper[5024]: I1007 13:13:28.004652 5024 scope.go:117] "RemoveContainer" containerID="7315e2ed33f72f9580da89abe4cd5a24e0374d37f5d275e7d0dde2457c1dd473" Oct 07 13:13:28 crc kubenswrapper[5024]: I1007 13:13:28.004877 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xrg5q" Oct 07 13:13:28 crc kubenswrapper[5024]: I1007 13:13:28.048687 5024 scope.go:117] "RemoveContainer" containerID="3a0d1c8e8da4ac45ae2a558713d83cba3d36c9c6dd367812acf9028030a19885" Oct 07 13:13:28 crc kubenswrapper[5024]: I1007 13:13:28.078793 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrg5q"] Oct 07 13:13:28 crc kubenswrapper[5024]: I1007 13:13:28.094572 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrg5q"] Oct 07 13:13:28 crc kubenswrapper[5024]: I1007 13:13:28.098949 5024 scope.go:117] "RemoveContainer" containerID="deaf6f147722694eeb61d46e1625170bec35dd34dd61c2568f62c5f388dea22f" Oct 07 13:13:28 crc kubenswrapper[5024]: I1007 13:13:28.152240 5024 scope.go:117] "RemoveContainer" containerID="7315e2ed33f72f9580da89abe4cd5a24e0374d37f5d275e7d0dde2457c1dd473" Oct 07 13:13:28 crc kubenswrapper[5024]: E1007 13:13:28.153053 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7315e2ed33f72f9580da89abe4cd5a24e0374d37f5d275e7d0dde2457c1dd473\": container with ID starting with 7315e2ed33f72f9580da89abe4cd5a24e0374d37f5d275e7d0dde2457c1dd473 not found: ID does not exist" containerID="7315e2ed33f72f9580da89abe4cd5a24e0374d37f5d275e7d0dde2457c1dd473" Oct 07 13:13:28 crc kubenswrapper[5024]: I1007 13:13:28.153229 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7315e2ed33f72f9580da89abe4cd5a24e0374d37f5d275e7d0dde2457c1dd473"} err="failed to get container status \"7315e2ed33f72f9580da89abe4cd5a24e0374d37f5d275e7d0dde2457c1dd473\": rpc error: code = NotFound desc = could not find container \"7315e2ed33f72f9580da89abe4cd5a24e0374d37f5d275e7d0dde2457c1dd473\": container with ID starting with 7315e2ed33f72f9580da89abe4cd5a24e0374d37f5d275e7d0dde2457c1dd473 not found: 
ID does not exist" Oct 07 13:13:28 crc kubenswrapper[5024]: I1007 13:13:28.153357 5024 scope.go:117] "RemoveContainer" containerID="3a0d1c8e8da4ac45ae2a558713d83cba3d36c9c6dd367812acf9028030a19885" Oct 07 13:13:28 crc kubenswrapper[5024]: E1007 13:13:28.153916 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a0d1c8e8da4ac45ae2a558713d83cba3d36c9c6dd367812acf9028030a19885\": container with ID starting with 3a0d1c8e8da4ac45ae2a558713d83cba3d36c9c6dd367812acf9028030a19885 not found: ID does not exist" containerID="3a0d1c8e8da4ac45ae2a558713d83cba3d36c9c6dd367812acf9028030a19885" Oct 07 13:13:28 crc kubenswrapper[5024]: I1007 13:13:28.154030 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a0d1c8e8da4ac45ae2a558713d83cba3d36c9c6dd367812acf9028030a19885"} err="failed to get container status \"3a0d1c8e8da4ac45ae2a558713d83cba3d36c9c6dd367812acf9028030a19885\": rpc error: code = NotFound desc = could not find container \"3a0d1c8e8da4ac45ae2a558713d83cba3d36c9c6dd367812acf9028030a19885\": container with ID starting with 3a0d1c8e8da4ac45ae2a558713d83cba3d36c9c6dd367812acf9028030a19885 not found: ID does not exist" Oct 07 13:13:28 crc kubenswrapper[5024]: I1007 13:13:28.154126 5024 scope.go:117] "RemoveContainer" containerID="deaf6f147722694eeb61d46e1625170bec35dd34dd61c2568f62c5f388dea22f" Oct 07 13:13:28 crc kubenswrapper[5024]: E1007 13:13:28.154703 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"deaf6f147722694eeb61d46e1625170bec35dd34dd61c2568f62c5f388dea22f\": container with ID starting with deaf6f147722694eeb61d46e1625170bec35dd34dd61c2568f62c5f388dea22f not found: ID does not exist" containerID="deaf6f147722694eeb61d46e1625170bec35dd34dd61c2568f62c5f388dea22f" Oct 07 13:13:28 crc kubenswrapper[5024]: I1007 13:13:28.154777 5024 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"deaf6f147722694eeb61d46e1625170bec35dd34dd61c2568f62c5f388dea22f"} err="failed to get container status \"deaf6f147722694eeb61d46e1625170bec35dd34dd61c2568f62c5f388dea22f\": rpc error: code = NotFound desc = could not find container \"deaf6f147722694eeb61d46e1625170bec35dd34dd61c2568f62c5f388dea22f\": container with ID starting with deaf6f147722694eeb61d46e1625170bec35dd34dd61c2568f62c5f388dea22f not found: ID does not exist" Oct 07 13:13:28 crc kubenswrapper[5024]: I1007 13:13:28.772244 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86d168e8-a919-401b-9ced-bfa6460bbb0a" path="/var/lib/kubelet/pods/86d168e8-a919-401b-9ced-bfa6460bbb0a/volumes" Oct 07 13:13:30 crc kubenswrapper[5024]: I1007 13:13:30.849005 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nll56" Oct 07 13:13:30 crc kubenswrapper[5024]: I1007 13:13:30.851989 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nll56" Oct 07 13:13:30 crc kubenswrapper[5024]: I1007 13:13:30.949947 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nll56" Oct 07 13:13:31 crc kubenswrapper[5024]: I1007 13:13:31.133912 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nll56" Oct 07 13:13:31 crc kubenswrapper[5024]: I1007 13:13:31.870284 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nll56"] Oct 07 13:13:33 crc kubenswrapper[5024]: I1007 13:13:33.069392 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nll56" podUID="63bb6bad-087a-4e7e-84cd-557668be48e6" containerName="registry-server" 
containerID="cri-o://b8dc6ba49182c8ffc7e9aa0cdaab134d4c739fe613ccd8ead4fda4865e787d5c" gracePeriod=2 Oct 07 13:13:33 crc kubenswrapper[5024]: I1007 13:13:33.661635 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nll56" Oct 07 13:13:33 crc kubenswrapper[5024]: I1007 13:13:33.834744 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63bb6bad-087a-4e7e-84cd-557668be48e6-utilities\") pod \"63bb6bad-087a-4e7e-84cd-557668be48e6\" (UID: \"63bb6bad-087a-4e7e-84cd-557668be48e6\") " Oct 07 13:13:33 crc kubenswrapper[5024]: I1007 13:13:33.834962 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63bb6bad-087a-4e7e-84cd-557668be48e6-catalog-content\") pod \"63bb6bad-087a-4e7e-84cd-557668be48e6\" (UID: \"63bb6bad-087a-4e7e-84cd-557668be48e6\") " Oct 07 13:13:33 crc kubenswrapper[5024]: I1007 13:13:33.835053 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4wql\" (UniqueName: \"kubernetes.io/projected/63bb6bad-087a-4e7e-84cd-557668be48e6-kube-api-access-m4wql\") pod \"63bb6bad-087a-4e7e-84cd-557668be48e6\" (UID: \"63bb6bad-087a-4e7e-84cd-557668be48e6\") " Oct 07 13:13:33 crc kubenswrapper[5024]: I1007 13:13:33.836447 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63bb6bad-087a-4e7e-84cd-557668be48e6-utilities" (OuterVolumeSpecName: "utilities") pod "63bb6bad-087a-4e7e-84cd-557668be48e6" (UID: "63bb6bad-087a-4e7e-84cd-557668be48e6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:13:33 crc kubenswrapper[5024]: I1007 13:13:33.846599 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63bb6bad-087a-4e7e-84cd-557668be48e6-kube-api-access-m4wql" (OuterVolumeSpecName: "kube-api-access-m4wql") pod "63bb6bad-087a-4e7e-84cd-557668be48e6" (UID: "63bb6bad-087a-4e7e-84cd-557668be48e6"). InnerVolumeSpecName "kube-api-access-m4wql". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:13:33 crc kubenswrapper[5024]: I1007 13:13:33.937864 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63bb6bad-087a-4e7e-84cd-557668be48e6-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 13:13:33 crc kubenswrapper[5024]: I1007 13:13:33.939416 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4wql\" (UniqueName: \"kubernetes.io/projected/63bb6bad-087a-4e7e-84cd-557668be48e6-kube-api-access-m4wql\") on node \"crc\" DevicePath \"\"" Oct 07 13:13:34 crc kubenswrapper[5024]: I1007 13:13:34.081656 5024 generic.go:334] "Generic (PLEG): container finished" podID="63bb6bad-087a-4e7e-84cd-557668be48e6" containerID="b8dc6ba49182c8ffc7e9aa0cdaab134d4c739fe613ccd8ead4fda4865e787d5c" exitCode=0 Oct 07 13:13:34 crc kubenswrapper[5024]: I1007 13:13:34.081727 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nll56" event={"ID":"63bb6bad-087a-4e7e-84cd-557668be48e6","Type":"ContainerDied","Data":"b8dc6ba49182c8ffc7e9aa0cdaab134d4c739fe613ccd8ead4fda4865e787d5c"} Oct 07 13:13:34 crc kubenswrapper[5024]: I1007 13:13:34.081775 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nll56" event={"ID":"63bb6bad-087a-4e7e-84cd-557668be48e6","Type":"ContainerDied","Data":"357c19680faa83f1d92a684ed774d8a196d562ced14144436b40d7c9923f4a56"} Oct 07 13:13:34 crc kubenswrapper[5024]: 
I1007 13:13:34.081772 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nll56" Oct 07 13:13:34 crc kubenswrapper[5024]: I1007 13:13:34.081819 5024 scope.go:117] "RemoveContainer" containerID="b8dc6ba49182c8ffc7e9aa0cdaab134d4c739fe613ccd8ead4fda4865e787d5c" Oct 07 13:13:34 crc kubenswrapper[5024]: I1007 13:13:34.116973 5024 scope.go:117] "RemoveContainer" containerID="9566ef7e1aa018bc90e95dd3823e9698dfc5e2ad7491cfd4cc41e33c9bef4bce" Oct 07 13:13:34 crc kubenswrapper[5024]: I1007 13:13:34.148431 5024 scope.go:117] "RemoveContainer" containerID="f68eec53b48bcaafaea050e20e8f9060cb93187cfc81e3da39e2376cdcc7c231" Oct 07 13:13:34 crc kubenswrapper[5024]: I1007 13:13:34.185795 5024 scope.go:117] "RemoveContainer" containerID="b8dc6ba49182c8ffc7e9aa0cdaab134d4c739fe613ccd8ead4fda4865e787d5c" Oct 07 13:13:34 crc kubenswrapper[5024]: E1007 13:13:34.186564 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8dc6ba49182c8ffc7e9aa0cdaab134d4c739fe613ccd8ead4fda4865e787d5c\": container with ID starting with b8dc6ba49182c8ffc7e9aa0cdaab134d4c739fe613ccd8ead4fda4865e787d5c not found: ID does not exist" containerID="b8dc6ba49182c8ffc7e9aa0cdaab134d4c739fe613ccd8ead4fda4865e787d5c" Oct 07 13:13:34 crc kubenswrapper[5024]: I1007 13:13:34.186648 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8dc6ba49182c8ffc7e9aa0cdaab134d4c739fe613ccd8ead4fda4865e787d5c"} err="failed to get container status \"b8dc6ba49182c8ffc7e9aa0cdaab134d4c739fe613ccd8ead4fda4865e787d5c\": rpc error: code = NotFound desc = could not find container \"b8dc6ba49182c8ffc7e9aa0cdaab134d4c739fe613ccd8ead4fda4865e787d5c\": container with ID starting with b8dc6ba49182c8ffc7e9aa0cdaab134d4c739fe613ccd8ead4fda4865e787d5c not found: ID does not exist" Oct 07 13:13:34 crc kubenswrapper[5024]: I1007 13:13:34.186691 5024 
scope.go:117] "RemoveContainer" containerID="9566ef7e1aa018bc90e95dd3823e9698dfc5e2ad7491cfd4cc41e33c9bef4bce" Oct 07 13:13:34 crc kubenswrapper[5024]: E1007 13:13:34.187014 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9566ef7e1aa018bc90e95dd3823e9698dfc5e2ad7491cfd4cc41e33c9bef4bce\": container with ID starting with 9566ef7e1aa018bc90e95dd3823e9698dfc5e2ad7491cfd4cc41e33c9bef4bce not found: ID does not exist" containerID="9566ef7e1aa018bc90e95dd3823e9698dfc5e2ad7491cfd4cc41e33c9bef4bce" Oct 07 13:13:34 crc kubenswrapper[5024]: I1007 13:13:34.187046 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9566ef7e1aa018bc90e95dd3823e9698dfc5e2ad7491cfd4cc41e33c9bef4bce"} err="failed to get container status \"9566ef7e1aa018bc90e95dd3823e9698dfc5e2ad7491cfd4cc41e33c9bef4bce\": rpc error: code = NotFound desc = could not find container \"9566ef7e1aa018bc90e95dd3823e9698dfc5e2ad7491cfd4cc41e33c9bef4bce\": container with ID starting with 9566ef7e1aa018bc90e95dd3823e9698dfc5e2ad7491cfd4cc41e33c9bef4bce not found: ID does not exist" Oct 07 13:13:34 crc kubenswrapper[5024]: I1007 13:13:34.187069 5024 scope.go:117] "RemoveContainer" containerID="f68eec53b48bcaafaea050e20e8f9060cb93187cfc81e3da39e2376cdcc7c231" Oct 07 13:13:34 crc kubenswrapper[5024]: E1007 13:13:34.187319 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f68eec53b48bcaafaea050e20e8f9060cb93187cfc81e3da39e2376cdcc7c231\": container with ID starting with f68eec53b48bcaafaea050e20e8f9060cb93187cfc81e3da39e2376cdcc7c231 not found: ID does not exist" containerID="f68eec53b48bcaafaea050e20e8f9060cb93187cfc81e3da39e2376cdcc7c231" Oct 07 13:13:34 crc kubenswrapper[5024]: I1007 13:13:34.187352 5024 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f68eec53b48bcaafaea050e20e8f9060cb93187cfc81e3da39e2376cdcc7c231"} err="failed to get container status \"f68eec53b48bcaafaea050e20e8f9060cb93187cfc81e3da39e2376cdcc7c231\": rpc error: code = NotFound desc = could not find container \"f68eec53b48bcaafaea050e20e8f9060cb93187cfc81e3da39e2376cdcc7c231\": container with ID starting with f68eec53b48bcaafaea050e20e8f9060cb93187cfc81e3da39e2376cdcc7c231 not found: ID does not exist" Oct 07 13:13:34 crc kubenswrapper[5024]: I1007 13:13:34.805872 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63bb6bad-087a-4e7e-84cd-557668be48e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63bb6bad-087a-4e7e-84cd-557668be48e6" (UID: "63bb6bad-087a-4e7e-84cd-557668be48e6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:13:34 crc kubenswrapper[5024]: I1007 13:13:34.861441 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63bb6bad-087a-4e7e-84cd-557668be48e6-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 13:13:35 crc kubenswrapper[5024]: I1007 13:13:35.050308 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nll56"] Oct 07 13:13:35 crc kubenswrapper[5024]: I1007 13:13:35.065117 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nll56"] Oct 07 13:13:36 crc kubenswrapper[5024]: I1007 13:13:36.769233 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63bb6bad-087a-4e7e-84cd-557668be48e6" path="/var/lib/kubelet/pods/63bb6bad-087a-4e7e-84cd-557668be48e6/volumes" Oct 07 13:14:13 crc kubenswrapper[5024]: I1007 13:14:13.720271 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 13:14:13 crc kubenswrapper[5024]: I1007 13:14:13.720804 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 13:14:43 crc kubenswrapper[5024]: I1007 13:14:43.720089 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 13:14:43 crc kubenswrapper[5024]: I1007 13:14:43.720901 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 13:15:00 crc kubenswrapper[5024]: I1007 13:15:00.169314 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29330715-p7htm"] Oct 07 13:15:00 crc kubenswrapper[5024]: E1007 13:15:00.170396 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63bb6bad-087a-4e7e-84cd-557668be48e6" containerName="extract-content" Oct 07 13:15:00 crc kubenswrapper[5024]: I1007 13:15:00.170411 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="63bb6bad-087a-4e7e-84cd-557668be48e6" containerName="extract-content" Oct 07 13:15:00 crc kubenswrapper[5024]: E1007 13:15:00.170424 5024 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="63bb6bad-087a-4e7e-84cd-557668be48e6" containerName="extract-utilities" Oct 07 13:15:00 crc kubenswrapper[5024]: I1007 13:15:00.170430 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="63bb6bad-087a-4e7e-84cd-557668be48e6" containerName="extract-utilities" Oct 07 13:15:00 crc kubenswrapper[5024]: E1007 13:15:00.170439 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d168e8-a919-401b-9ced-bfa6460bbb0a" containerName="extract-utilities" Oct 07 13:15:00 crc kubenswrapper[5024]: I1007 13:15:00.170445 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d168e8-a919-401b-9ced-bfa6460bbb0a" containerName="extract-utilities" Oct 07 13:15:00 crc kubenswrapper[5024]: E1007 13:15:00.170473 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63bb6bad-087a-4e7e-84cd-557668be48e6" containerName="registry-server" Oct 07 13:15:00 crc kubenswrapper[5024]: I1007 13:15:00.170479 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="63bb6bad-087a-4e7e-84cd-557668be48e6" containerName="registry-server" Oct 07 13:15:00 crc kubenswrapper[5024]: E1007 13:15:00.170497 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d168e8-a919-401b-9ced-bfa6460bbb0a" containerName="registry-server" Oct 07 13:15:00 crc kubenswrapper[5024]: I1007 13:15:00.170503 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d168e8-a919-401b-9ced-bfa6460bbb0a" containerName="registry-server" Oct 07 13:15:00 crc kubenswrapper[5024]: E1007 13:15:00.170515 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d168e8-a919-401b-9ced-bfa6460bbb0a" containerName="extract-content" Oct 07 13:15:00 crc kubenswrapper[5024]: I1007 13:15:00.170521 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d168e8-a919-401b-9ced-bfa6460bbb0a" containerName="extract-content" Oct 07 13:15:00 crc kubenswrapper[5024]: I1007 13:15:00.170702 5024 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="63bb6bad-087a-4e7e-84cd-557668be48e6" containerName="registry-server" Oct 07 13:15:00 crc kubenswrapper[5024]: I1007 13:15:00.170724 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="86d168e8-a919-401b-9ced-bfa6460bbb0a" containerName="registry-server" Oct 07 13:15:00 crc kubenswrapper[5024]: I1007 13:15:00.171398 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29330715-p7htm" Oct 07 13:15:00 crc kubenswrapper[5024]: I1007 13:15:00.177057 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 07 13:15:00 crc kubenswrapper[5024]: I1007 13:15:00.178017 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 07 13:15:00 crc kubenswrapper[5024]: I1007 13:15:00.184419 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29330715-p7htm"] Oct 07 13:15:00 crc kubenswrapper[5024]: I1007 13:15:00.271133 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c45edbff-f53e-4b87-8a17-85618fdbfc3a-config-volume\") pod \"collect-profiles-29330715-p7htm\" (UID: \"c45edbff-f53e-4b87-8a17-85618fdbfc3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330715-p7htm" Oct 07 13:15:00 crc kubenswrapper[5024]: I1007 13:15:00.271231 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8hbj\" (UniqueName: \"kubernetes.io/projected/c45edbff-f53e-4b87-8a17-85618fdbfc3a-kube-api-access-m8hbj\") pod \"collect-profiles-29330715-p7htm\" (UID: \"c45edbff-f53e-4b87-8a17-85618fdbfc3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330715-p7htm" Oct 07 
13:15:00 crc kubenswrapper[5024]: I1007 13:15:00.271266 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c45edbff-f53e-4b87-8a17-85618fdbfc3a-secret-volume\") pod \"collect-profiles-29330715-p7htm\" (UID: \"c45edbff-f53e-4b87-8a17-85618fdbfc3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330715-p7htm" Oct 07 13:15:00 crc kubenswrapper[5024]: I1007 13:15:00.373403 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c45edbff-f53e-4b87-8a17-85618fdbfc3a-config-volume\") pod \"collect-profiles-29330715-p7htm\" (UID: \"c45edbff-f53e-4b87-8a17-85618fdbfc3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330715-p7htm" Oct 07 13:15:00 crc kubenswrapper[5024]: I1007 13:15:00.373518 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8hbj\" (UniqueName: \"kubernetes.io/projected/c45edbff-f53e-4b87-8a17-85618fdbfc3a-kube-api-access-m8hbj\") pod \"collect-profiles-29330715-p7htm\" (UID: \"c45edbff-f53e-4b87-8a17-85618fdbfc3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330715-p7htm" Oct 07 13:15:00 crc kubenswrapper[5024]: I1007 13:15:00.373548 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c45edbff-f53e-4b87-8a17-85618fdbfc3a-secret-volume\") pod \"collect-profiles-29330715-p7htm\" (UID: \"c45edbff-f53e-4b87-8a17-85618fdbfc3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330715-p7htm" Oct 07 13:15:00 crc kubenswrapper[5024]: I1007 13:15:00.375498 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c45edbff-f53e-4b87-8a17-85618fdbfc3a-config-volume\") pod \"collect-profiles-29330715-p7htm\" (UID: 
\"c45edbff-f53e-4b87-8a17-85618fdbfc3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330715-p7htm" Oct 07 13:15:00 crc kubenswrapper[5024]: I1007 13:15:00.387512 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c45edbff-f53e-4b87-8a17-85618fdbfc3a-secret-volume\") pod \"collect-profiles-29330715-p7htm\" (UID: \"c45edbff-f53e-4b87-8a17-85618fdbfc3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330715-p7htm" Oct 07 13:15:00 crc kubenswrapper[5024]: I1007 13:15:00.396627 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8hbj\" (UniqueName: \"kubernetes.io/projected/c45edbff-f53e-4b87-8a17-85618fdbfc3a-kube-api-access-m8hbj\") pod \"collect-profiles-29330715-p7htm\" (UID: \"c45edbff-f53e-4b87-8a17-85618fdbfc3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330715-p7htm" Oct 07 13:15:00 crc kubenswrapper[5024]: I1007 13:15:00.544523 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29330715-p7htm" Oct 07 13:15:01 crc kubenswrapper[5024]: I1007 13:15:01.056471 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29330715-p7htm"] Oct 07 13:15:01 crc kubenswrapper[5024]: I1007 13:15:01.248261 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29330715-p7htm" event={"ID":"c45edbff-f53e-4b87-8a17-85618fdbfc3a","Type":"ContainerStarted","Data":"c559e5ef077d2c11f9ee5ffaa108ca5142b46b1527c1e418e72b7f1fc255bea4"} Oct 07 13:15:02 crc kubenswrapper[5024]: I1007 13:15:02.263005 5024 generic.go:334] "Generic (PLEG): container finished" podID="c45edbff-f53e-4b87-8a17-85618fdbfc3a" containerID="dac1fbd5f3f0938d592c58223599db712b5427c6c4356cd050e9785dfc35bc94" exitCode=0 Oct 07 13:15:02 crc kubenswrapper[5024]: I1007 13:15:02.263082 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29330715-p7htm" event={"ID":"c45edbff-f53e-4b87-8a17-85618fdbfc3a","Type":"ContainerDied","Data":"dac1fbd5f3f0938d592c58223599db712b5427c6c4356cd050e9785dfc35bc94"} Oct 07 13:15:03 crc kubenswrapper[5024]: I1007 13:15:03.673288 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29330715-p7htm" Oct 07 13:15:03 crc kubenswrapper[5024]: I1007 13:15:03.746848 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8hbj\" (UniqueName: \"kubernetes.io/projected/c45edbff-f53e-4b87-8a17-85618fdbfc3a-kube-api-access-m8hbj\") pod \"c45edbff-f53e-4b87-8a17-85618fdbfc3a\" (UID: \"c45edbff-f53e-4b87-8a17-85618fdbfc3a\") " Oct 07 13:15:03 crc kubenswrapper[5024]: I1007 13:15:03.746900 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c45edbff-f53e-4b87-8a17-85618fdbfc3a-config-volume\") pod \"c45edbff-f53e-4b87-8a17-85618fdbfc3a\" (UID: \"c45edbff-f53e-4b87-8a17-85618fdbfc3a\") " Oct 07 13:15:03 crc kubenswrapper[5024]: I1007 13:15:03.747275 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c45edbff-f53e-4b87-8a17-85618fdbfc3a-secret-volume\") pod \"c45edbff-f53e-4b87-8a17-85618fdbfc3a\" (UID: \"c45edbff-f53e-4b87-8a17-85618fdbfc3a\") " Oct 07 13:15:03 crc kubenswrapper[5024]: I1007 13:15:03.747593 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c45edbff-f53e-4b87-8a17-85618fdbfc3a-config-volume" (OuterVolumeSpecName: "config-volume") pod "c45edbff-f53e-4b87-8a17-85618fdbfc3a" (UID: "c45edbff-f53e-4b87-8a17-85618fdbfc3a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 13:15:03 crc kubenswrapper[5024]: I1007 13:15:03.752756 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c45edbff-f53e-4b87-8a17-85618fdbfc3a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c45edbff-f53e-4b87-8a17-85618fdbfc3a" (UID: "c45edbff-f53e-4b87-8a17-85618fdbfc3a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:15:03 crc kubenswrapper[5024]: I1007 13:15:03.754609 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c45edbff-f53e-4b87-8a17-85618fdbfc3a-kube-api-access-m8hbj" (OuterVolumeSpecName: "kube-api-access-m8hbj") pod "c45edbff-f53e-4b87-8a17-85618fdbfc3a" (UID: "c45edbff-f53e-4b87-8a17-85618fdbfc3a"). InnerVolumeSpecName "kube-api-access-m8hbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:15:03 crc kubenswrapper[5024]: I1007 13:15:03.864690 5024 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c45edbff-f53e-4b87-8a17-85618fdbfc3a-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 07 13:15:03 crc kubenswrapper[5024]: I1007 13:15:03.864792 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8hbj\" (UniqueName: \"kubernetes.io/projected/c45edbff-f53e-4b87-8a17-85618fdbfc3a-kube-api-access-m8hbj\") on node \"crc\" DevicePath \"\"" Oct 07 13:15:03 crc kubenswrapper[5024]: I1007 13:15:03.864822 5024 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c45edbff-f53e-4b87-8a17-85618fdbfc3a-config-volume\") on node \"crc\" DevicePath \"\"" Oct 07 13:15:04 crc kubenswrapper[5024]: I1007 13:15:04.289245 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29330715-p7htm" event={"ID":"c45edbff-f53e-4b87-8a17-85618fdbfc3a","Type":"ContainerDied","Data":"c559e5ef077d2c11f9ee5ffaa108ca5142b46b1527c1e418e72b7f1fc255bea4"} Oct 07 13:15:04 crc kubenswrapper[5024]: I1007 13:15:04.289325 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c559e5ef077d2c11f9ee5ffaa108ca5142b46b1527c1e418e72b7f1fc255bea4" Oct 07 13:15:04 crc kubenswrapper[5024]: I1007 13:15:04.289997 5024 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29330715-p7htm" Oct 07 13:15:04 crc kubenswrapper[5024]: I1007 13:15:04.781024 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29330670-5868z"] Oct 07 13:15:04 crc kubenswrapper[5024]: I1007 13:15:04.791126 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29330670-5868z"] Oct 07 13:15:06 crc kubenswrapper[5024]: I1007 13:15:06.772358 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0532ff61-84a7-44b0-b8d3-d6ffad413de5" path="/var/lib/kubelet/pods/0532ff61-84a7-44b0-b8d3-d6ffad413de5/volumes" Oct 07 13:15:13 crc kubenswrapper[5024]: I1007 13:15:13.720781 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 13:15:13 crc kubenswrapper[5024]: I1007 13:15:13.721527 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 13:15:13 crc kubenswrapper[5024]: I1007 13:15:13.721572 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 13:15:13 crc kubenswrapper[5024]: I1007 13:15:13.722463 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"44bf427e2b898dc1d86c2ae0246dcc7247832bdeeecf4b463527fc51982a929d"} 
pod="openshift-machine-config-operator/machine-config-daemon-t95cr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 07 13:15:13 crc kubenswrapper[5024]: I1007 13:15:13.722539 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" containerID="cri-o://44bf427e2b898dc1d86c2ae0246dcc7247832bdeeecf4b463527fc51982a929d" gracePeriod=600 Oct 07 13:15:14 crc kubenswrapper[5024]: I1007 13:15:14.417960 5024 generic.go:334] "Generic (PLEG): container finished" podID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerID="44bf427e2b898dc1d86c2ae0246dcc7247832bdeeecf4b463527fc51982a929d" exitCode=0 Oct 07 13:15:14 crc kubenswrapper[5024]: I1007 13:15:14.418062 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerDied","Data":"44bf427e2b898dc1d86c2ae0246dcc7247832bdeeecf4b463527fc51982a929d"} Oct 07 13:15:14 crc kubenswrapper[5024]: I1007 13:15:14.418418 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerStarted","Data":"7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd"} Oct 07 13:15:14 crc kubenswrapper[5024]: I1007 13:15:14.418447 5024 scope.go:117] "RemoveContainer" containerID="a8308219ca625c5aa91a94d0a3e514fe32b3247fe4a1f0fce8e285fbf997946b" Oct 07 13:15:48 crc kubenswrapper[5024]: I1007 13:15:48.466503 5024 scope.go:117] "RemoveContainer" containerID="bcddc5806317dd060af7266f263935171470854340ba0a07b4d171f5ff7f5e68" Oct 07 13:17:36 crc kubenswrapper[5024]: I1007 13:17:36.144936 5024 generic.go:334] "Generic (PLEG): container finished" 
podID="a204d343-554a-475d-8b5b-dc0a7d5a09c9" containerID="91d79fcaa356f8527679ba0a58e606ade87251dd7d7556be01c4f14c1b165bbd" exitCode=0 Oct 07 13:17:36 crc kubenswrapper[5024]: I1007 13:17:36.145112 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f" event={"ID":"a204d343-554a-475d-8b5b-dc0a7d5a09c9","Type":"ContainerDied","Data":"91d79fcaa356f8527679ba0a58e606ade87251dd7d7556be01c4f14c1b165bbd"} Oct 07 13:17:37 crc kubenswrapper[5024]: I1007 13:17:37.651489 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f" Oct 07 13:17:37 crc kubenswrapper[5024]: I1007 13:17:37.801299 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-inventory\") pod \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\" (UID: \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\") " Oct 07 13:17:37 crc kubenswrapper[5024]: I1007 13:17:37.801448 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcl9p\" (UniqueName: \"kubernetes.io/projected/a204d343-554a-475d-8b5b-dc0a7d5a09c9-kube-api-access-bcl9p\") pod \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\" (UID: \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\") " Oct 07 13:17:37 crc kubenswrapper[5024]: I1007 13:17:37.801523 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-ssh-key\") pod \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\" (UID: \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\") " Oct 07 13:17:37 crc kubenswrapper[5024]: I1007 13:17:37.801759 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-libvirt-combined-ca-bundle\") pod \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\" (UID: \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\") " Oct 07 13:17:37 crc kubenswrapper[5024]: I1007 13:17:37.801808 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-ceph\") pod \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\" (UID: \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\") " Oct 07 13:17:37 crc kubenswrapper[5024]: I1007 13:17:37.801945 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-libvirt-secret-0\") pod \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\" (UID: \"a204d343-554a-475d-8b5b-dc0a7d5a09c9\") " Oct 07 13:17:37 crc kubenswrapper[5024]: I1007 13:17:37.812566 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "a204d343-554a-475d-8b5b-dc0a7d5a09c9" (UID: "a204d343-554a-475d-8b5b-dc0a7d5a09c9"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:17:37 crc kubenswrapper[5024]: I1007 13:17:37.813213 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-ceph" (OuterVolumeSpecName: "ceph") pod "a204d343-554a-475d-8b5b-dc0a7d5a09c9" (UID: "a204d343-554a-475d-8b5b-dc0a7d5a09c9"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:17:37 crc kubenswrapper[5024]: I1007 13:17:37.813895 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a204d343-554a-475d-8b5b-dc0a7d5a09c9-kube-api-access-bcl9p" (OuterVolumeSpecName: "kube-api-access-bcl9p") pod "a204d343-554a-475d-8b5b-dc0a7d5a09c9" (UID: "a204d343-554a-475d-8b5b-dc0a7d5a09c9"). InnerVolumeSpecName "kube-api-access-bcl9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:17:37 crc kubenswrapper[5024]: I1007 13:17:37.836904 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "a204d343-554a-475d-8b5b-dc0a7d5a09c9" (UID: "a204d343-554a-475d-8b5b-dc0a7d5a09c9"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:17:37 crc kubenswrapper[5024]: I1007 13:17:37.841955 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a204d343-554a-475d-8b5b-dc0a7d5a09c9" (UID: "a204d343-554a-475d-8b5b-dc0a7d5a09c9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:17:37 crc kubenswrapper[5024]: I1007 13:17:37.857414 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-inventory" (OuterVolumeSpecName: "inventory") pod "a204d343-554a-475d-8b5b-dc0a7d5a09c9" (UID: "a204d343-554a-475d-8b5b-dc0a7d5a09c9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:17:37 crc kubenswrapper[5024]: I1007 13:17:37.905227 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-inventory\") on node \"crc\" DevicePath \"\"" Oct 07 13:17:37 crc kubenswrapper[5024]: I1007 13:17:37.905264 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcl9p\" (UniqueName: \"kubernetes.io/projected/a204d343-554a-475d-8b5b-dc0a7d5a09c9-kube-api-access-bcl9p\") on node \"crc\" DevicePath \"\"" Oct 07 13:17:37 crc kubenswrapper[5024]: I1007 13:17:37.905278 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 07 13:17:37 crc kubenswrapper[5024]: I1007 13:17:37.905287 5024 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 13:17:37 crc kubenswrapper[5024]: I1007 13:17:37.905300 5024 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-ceph\") on node \"crc\" DevicePath \"\"" Oct 07 13:17:37 crc kubenswrapper[5024]: I1007 13:17:37.905313 5024 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a204d343-554a-475d-8b5b-dc0a7d5a09c9-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.170366 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f" event={"ID":"a204d343-554a-475d-8b5b-dc0a7d5a09c9","Type":"ContainerDied","Data":"4a0d88e88b303a998a404ae160e2ed970da360e72ba51f76c32c5d096b46d500"} Oct 07 13:17:38 crc 
kubenswrapper[5024]: I1007 13:17:38.170424 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a0d88e88b303a998a404ae160e2ed970da360e72ba51f76c32c5d096b46d500" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.170503 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.374129 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw"] Oct 07 13:17:38 crc kubenswrapper[5024]: E1007 13:17:38.374638 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c45edbff-f53e-4b87-8a17-85618fdbfc3a" containerName="collect-profiles" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.374661 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c45edbff-f53e-4b87-8a17-85618fdbfc3a" containerName="collect-profiles" Oct 07 13:17:38 crc kubenswrapper[5024]: E1007 13:17:38.374736 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a204d343-554a-475d-8b5b-dc0a7d5a09c9" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.374747 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a204d343-554a-475d-8b5b-dc0a7d5a09c9" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.374974 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="c45edbff-f53e-4b87-8a17-85618fdbfc3a" containerName="collect-profiles" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.374999 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="a204d343-554a-475d-8b5b-dc0a7d5a09c9" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.375813 5024 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.378732 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.379033 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.379071 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.379174 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.381237 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.384223 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.384300 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.384614 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.384797 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-424lb" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.396307 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw"] Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.518778 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.518849 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.519415 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.519476 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.519515 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-cell1-compute-config-0\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.519545 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/81a3838c-53ed-4367-8b5d-35295d94823c-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.519565 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.519601 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nnfn\" (UniqueName: \"kubernetes.io/projected/81a3838c-53ed-4367-8b5d-35295d94823c-kube-api-access-2nnfn\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.519631 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-migration-ssh-key-1\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.519809 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/81a3838c-53ed-4367-8b5d-35295d94823c-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.519932 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.622626 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/81a3838c-53ed-4367-8b5d-35295d94823c-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.622674 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: 
\"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.622731 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nnfn\" (UniqueName: \"kubernetes.io/projected/81a3838c-53ed-4367-8b5d-35295d94823c-kube-api-access-2nnfn\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.622766 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.622818 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/81a3838c-53ed-4367-8b5d-35295d94823c-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.622864 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc 
kubenswrapper[5024]: I1007 13:17:38.622937 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.622972 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.623004 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.623036 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.623066 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.623868 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/81a3838c-53ed-4367-8b5d-35295d94823c-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.624380 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/81a3838c-53ed-4367-8b5d-35295d94823c-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.628220 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.628348 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: 
\"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.628545 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.628800 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.629982 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.630860 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.632010 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.632640 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.643522 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nnfn\" (UniqueName: \"kubernetes.io/projected/81a3838c-53ed-4367-8b5d-35295d94823c-kube-api-access-2nnfn\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:38 crc kubenswrapper[5024]: I1007 13:17:38.696600 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:17:39 crc kubenswrapper[5024]: I1007 13:17:39.244705 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw"] Oct 07 13:17:39 crc kubenswrapper[5024]: W1007 13:17:39.249346 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81a3838c_53ed_4367_8b5d_35295d94823c.slice/crio-76a3ca654ce9710bf48c6755b32bfc10d50442c4ba31df5eecca8d9b32051454 WatchSource:0}: Error finding container 76a3ca654ce9710bf48c6755b32bfc10d50442c4ba31df5eecca8d9b32051454: Status 404 returned error can't find the container with id 76a3ca654ce9710bf48c6755b32bfc10d50442c4ba31df5eecca8d9b32051454 Oct 07 13:17:39 crc kubenswrapper[5024]: I1007 13:17:39.253119 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 07 13:17:40 crc kubenswrapper[5024]: I1007 13:17:40.197650 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" event={"ID":"81a3838c-53ed-4367-8b5d-35295d94823c","Type":"ContainerStarted","Data":"76a3ca654ce9710bf48c6755b32bfc10d50442c4ba31df5eecca8d9b32051454"} Oct 07 13:17:41 crc kubenswrapper[5024]: I1007 13:17:41.210678 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" event={"ID":"81a3838c-53ed-4367-8b5d-35295d94823c","Type":"ContainerStarted","Data":"f77005114b696d809ca67480ec917b03fe325a193a59477d96d75417e0d8c99c"} Oct 07 13:17:41 crc kubenswrapper[5024]: I1007 13:17:41.244896 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" podStartSLOduration=2.177369483 podStartE2EDuration="3.244873778s" podCreationTimestamp="2025-10-07 
13:17:38 +0000 UTC" firstStartedPulling="2025-10-07 13:17:39.252834984 +0000 UTC m=+2997.328621822" lastFinishedPulling="2025-10-07 13:17:40.320339259 +0000 UTC m=+2998.396126117" observedRunningTime="2025-10-07 13:17:41.233353605 +0000 UTC m=+2999.309140483" watchObservedRunningTime="2025-10-07 13:17:41.244873778 +0000 UTC m=+2999.320660626" Oct 07 13:17:43 crc kubenswrapper[5024]: I1007 13:17:43.720785 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 13:17:43 crc kubenswrapper[5024]: I1007 13:17:43.721171 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 13:18:13 crc kubenswrapper[5024]: I1007 13:18:13.720262 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 13:18:13 crc kubenswrapper[5024]: I1007 13:18:13.722676 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 13:18:43 crc kubenswrapper[5024]: I1007 13:18:43.720711 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 13:18:43 crc kubenswrapper[5024]: I1007 13:18:43.721571 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 13:18:43 crc kubenswrapper[5024]: I1007 13:18:43.721648 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 13:18:43 crc kubenswrapper[5024]: I1007 13:18:43.722824 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd"} pod="openshift-machine-config-operator/machine-config-daemon-t95cr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 07 13:18:43 crc kubenswrapper[5024]: I1007 13:18:43.722928 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" containerID="cri-o://7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd" gracePeriod=600 Oct 07 13:18:43 crc kubenswrapper[5024]: E1007 13:18:43.856856 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:18:43 crc kubenswrapper[5024]: I1007 13:18:43.909674 5024 generic.go:334] "Generic (PLEG): container finished" podID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerID="7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd" exitCode=0 Oct 07 13:18:43 crc kubenswrapper[5024]: I1007 13:18:43.909710 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerDied","Data":"7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd"} Oct 07 13:18:43 crc kubenswrapper[5024]: I1007 13:18:43.910091 5024 scope.go:117] "RemoveContainer" containerID="44bf427e2b898dc1d86c2ae0246dcc7247832bdeeecf4b463527fc51982a929d" Oct 07 13:18:43 crc kubenswrapper[5024]: I1007 13:18:43.911030 5024 scope.go:117] "RemoveContainer" containerID="7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd" Oct 07 13:18:43 crc kubenswrapper[5024]: E1007 13:18:43.911569 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:18:55 crc kubenswrapper[5024]: I1007 13:18:55.752700 5024 scope.go:117] "RemoveContainer" containerID="7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd" Oct 07 13:18:55 crc kubenswrapper[5024]: E1007 13:18:55.754010 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:19:10 crc kubenswrapper[5024]: I1007 13:19:10.752122 5024 scope.go:117] "RemoveContainer" containerID="7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd" Oct 07 13:19:10 crc kubenswrapper[5024]: E1007 13:19:10.754476 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:19:25 crc kubenswrapper[5024]: I1007 13:19:25.751429 5024 scope.go:117] "RemoveContainer" containerID="7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd" Oct 07 13:19:25 crc kubenswrapper[5024]: E1007 13:19:25.752244 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:19:36 crc kubenswrapper[5024]: I1007 13:19:36.751905 5024 scope.go:117] "RemoveContainer" containerID="7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd" Oct 07 13:19:36 crc kubenswrapper[5024]: E1007 13:19:36.752799 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:19:51 crc kubenswrapper[5024]: I1007 13:19:51.752737 5024 scope.go:117] "RemoveContainer" containerID="7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd" Oct 07 13:19:51 crc kubenswrapper[5024]: E1007 13:19:51.753874 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:20:04 crc kubenswrapper[5024]: I1007 13:20:04.752654 5024 scope.go:117] "RemoveContainer" containerID="7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd" Oct 07 13:20:04 crc kubenswrapper[5024]: E1007 13:20:04.754288 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:20:15 crc kubenswrapper[5024]: I1007 13:20:15.751491 5024 scope.go:117] "RemoveContainer" containerID="7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd" Oct 07 13:20:15 crc kubenswrapper[5024]: E1007 13:20:15.752485 5024 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:20:26 crc kubenswrapper[5024]: I1007 13:20:26.752449 5024 scope.go:117] "RemoveContainer" containerID="7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd" Oct 07 13:20:26 crc kubenswrapper[5024]: E1007 13:20:26.753316 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:20:39 crc kubenswrapper[5024]: I1007 13:20:39.752433 5024 scope.go:117] "RemoveContainer" containerID="7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd" Oct 07 13:20:39 crc kubenswrapper[5024]: E1007 13:20:39.753570 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:20:51 crc kubenswrapper[5024]: I1007 13:20:51.751745 5024 scope.go:117] "RemoveContainer" containerID="7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd" Oct 07 13:20:51 crc kubenswrapper[5024]: E1007 13:20:51.752891 5024 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:20:54 crc kubenswrapper[5024]: I1007 13:20:54.367883 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j6svt"] Oct 07 13:20:54 crc kubenswrapper[5024]: I1007 13:20:54.370525 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j6svt"] Oct 07 13:20:54 crc kubenswrapper[5024]: I1007 13:20:54.370629 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j6svt" Oct 07 13:20:54 crc kubenswrapper[5024]: I1007 13:20:54.436225 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bea984d6-9d5c-4a48-bba9-04ae6e2f0247-utilities\") pod \"community-operators-j6svt\" (UID: \"bea984d6-9d5c-4a48-bba9-04ae6e2f0247\") " pod="openshift-marketplace/community-operators-j6svt" Oct 07 13:20:54 crc kubenswrapper[5024]: I1007 13:20:54.436381 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6dbk\" (UniqueName: \"kubernetes.io/projected/bea984d6-9d5c-4a48-bba9-04ae6e2f0247-kube-api-access-l6dbk\") pod \"community-operators-j6svt\" (UID: \"bea984d6-9d5c-4a48-bba9-04ae6e2f0247\") " pod="openshift-marketplace/community-operators-j6svt" Oct 07 13:20:54 crc kubenswrapper[5024]: I1007 13:20:54.436525 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bea984d6-9d5c-4a48-bba9-04ae6e2f0247-catalog-content\") pod \"community-operators-j6svt\" (UID: \"bea984d6-9d5c-4a48-bba9-04ae6e2f0247\") " pod="openshift-marketplace/community-operators-j6svt" Oct 07 13:20:54 crc kubenswrapper[5024]: I1007 13:20:54.539029 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bea984d6-9d5c-4a48-bba9-04ae6e2f0247-catalog-content\") pod \"community-operators-j6svt\" (UID: \"bea984d6-9d5c-4a48-bba9-04ae6e2f0247\") " pod="openshift-marketplace/community-operators-j6svt" Oct 07 13:20:54 crc kubenswrapper[5024]: I1007 13:20:54.539122 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bea984d6-9d5c-4a48-bba9-04ae6e2f0247-utilities\") pod \"community-operators-j6svt\" (UID: \"bea984d6-9d5c-4a48-bba9-04ae6e2f0247\") " pod="openshift-marketplace/community-operators-j6svt" Oct 07 13:20:54 crc kubenswrapper[5024]: I1007 13:20:54.539234 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6dbk\" (UniqueName: \"kubernetes.io/projected/bea984d6-9d5c-4a48-bba9-04ae6e2f0247-kube-api-access-l6dbk\") pod \"community-operators-j6svt\" (UID: \"bea984d6-9d5c-4a48-bba9-04ae6e2f0247\") " pod="openshift-marketplace/community-operators-j6svt" Oct 07 13:20:54 crc kubenswrapper[5024]: I1007 13:20:54.539807 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bea984d6-9d5c-4a48-bba9-04ae6e2f0247-catalog-content\") pod \"community-operators-j6svt\" (UID: \"bea984d6-9d5c-4a48-bba9-04ae6e2f0247\") " pod="openshift-marketplace/community-operators-j6svt" Oct 07 13:20:54 crc kubenswrapper[5024]: I1007 13:20:54.539852 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/bea984d6-9d5c-4a48-bba9-04ae6e2f0247-utilities\") pod \"community-operators-j6svt\" (UID: \"bea984d6-9d5c-4a48-bba9-04ae6e2f0247\") " pod="openshift-marketplace/community-operators-j6svt" Oct 07 13:20:54 crc kubenswrapper[5024]: I1007 13:20:54.562323 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6dbk\" (UniqueName: \"kubernetes.io/projected/bea984d6-9d5c-4a48-bba9-04ae6e2f0247-kube-api-access-l6dbk\") pod \"community-operators-j6svt\" (UID: \"bea984d6-9d5c-4a48-bba9-04ae6e2f0247\") " pod="openshift-marketplace/community-operators-j6svt" Oct 07 13:20:54 crc kubenswrapper[5024]: I1007 13:20:54.741610 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j6svt" Oct 07 13:20:55 crc kubenswrapper[5024]: I1007 13:20:55.335101 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j6svt"] Oct 07 13:20:55 crc kubenswrapper[5024]: I1007 13:20:55.502544 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6svt" event={"ID":"bea984d6-9d5c-4a48-bba9-04ae6e2f0247","Type":"ContainerStarted","Data":"9de68b26c38952b9bfd07642811b7fc42982dd66e6148fe9fa9db3d5cd1fb0e4"} Oct 07 13:20:56 crc kubenswrapper[5024]: I1007 13:20:56.517446 5024 generic.go:334] "Generic (PLEG): container finished" podID="bea984d6-9d5c-4a48-bba9-04ae6e2f0247" containerID="3b7b983479a81f1c03e31e5868f7cbe8fe4cf8c57f824e5640d7448dc74cef62" exitCode=0 Oct 07 13:20:56 crc kubenswrapper[5024]: I1007 13:20:56.517515 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6svt" event={"ID":"bea984d6-9d5c-4a48-bba9-04ae6e2f0247","Type":"ContainerDied","Data":"3b7b983479a81f1c03e31e5868f7cbe8fe4cf8c57f824e5640d7448dc74cef62"} Oct 07 13:20:58 crc kubenswrapper[5024]: I1007 13:20:58.540642 5024 generic.go:334] "Generic (PLEG): container 
finished" podID="bea984d6-9d5c-4a48-bba9-04ae6e2f0247" containerID="608cc92cfd83e6dc01e47b3142cf6c1ed58f052849a915bee595704ad8dce260" exitCode=0 Oct 07 13:20:58 crc kubenswrapper[5024]: I1007 13:20:58.540727 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6svt" event={"ID":"bea984d6-9d5c-4a48-bba9-04ae6e2f0247","Type":"ContainerDied","Data":"608cc92cfd83e6dc01e47b3142cf6c1ed58f052849a915bee595704ad8dce260"} Oct 07 13:20:59 crc kubenswrapper[5024]: I1007 13:20:59.563046 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6svt" event={"ID":"bea984d6-9d5c-4a48-bba9-04ae6e2f0247","Type":"ContainerStarted","Data":"e28c3e717009a0f4a57f05bef04850c5f9e944d53cb4d731cb763fcad71a36ee"} Oct 07 13:20:59 crc kubenswrapper[5024]: I1007 13:20:59.594788 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j6svt" podStartSLOduration=2.967521767 podStartE2EDuration="5.594751773s" podCreationTimestamp="2025-10-07 13:20:54 +0000 UTC" firstStartedPulling="2025-10-07 13:20:56.520185389 +0000 UTC m=+3194.595972237" lastFinishedPulling="2025-10-07 13:20:59.147415405 +0000 UTC m=+3197.223202243" observedRunningTime="2025-10-07 13:20:59.591118388 +0000 UTC m=+3197.666905226" watchObservedRunningTime="2025-10-07 13:20:59.594751773 +0000 UTC m=+3197.670538611" Oct 07 13:21:04 crc kubenswrapper[5024]: I1007 13:21:04.742571 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j6svt" Oct 07 13:21:04 crc kubenswrapper[5024]: I1007 13:21:04.743297 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j6svt" Oct 07 13:21:04 crc kubenswrapper[5024]: I1007 13:21:04.753519 5024 scope.go:117] "RemoveContainer" containerID="7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd" Oct 07 13:21:04 
crc kubenswrapper[5024]: E1007 13:21:04.754084 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:21:04 crc kubenswrapper[5024]: I1007 13:21:04.820537 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j6svt" Oct 07 13:21:05 crc kubenswrapper[5024]: I1007 13:21:05.705241 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j6svt" Oct 07 13:21:05 crc kubenswrapper[5024]: I1007 13:21:05.780721 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j6svt"] Oct 07 13:21:07 crc kubenswrapper[5024]: I1007 13:21:07.664308 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j6svt" podUID="bea984d6-9d5c-4a48-bba9-04ae6e2f0247" containerName="registry-server" containerID="cri-o://e28c3e717009a0f4a57f05bef04850c5f9e944d53cb4d731cb763fcad71a36ee" gracePeriod=2 Oct 07 13:21:08 crc kubenswrapper[5024]: I1007 13:21:08.111497 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j6svt" Oct 07 13:21:08 crc kubenswrapper[5024]: I1007 13:21:08.296333 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6dbk\" (UniqueName: \"kubernetes.io/projected/bea984d6-9d5c-4a48-bba9-04ae6e2f0247-kube-api-access-l6dbk\") pod \"bea984d6-9d5c-4a48-bba9-04ae6e2f0247\" (UID: \"bea984d6-9d5c-4a48-bba9-04ae6e2f0247\") " Oct 07 13:21:08 crc kubenswrapper[5024]: I1007 13:21:08.296439 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bea984d6-9d5c-4a48-bba9-04ae6e2f0247-catalog-content\") pod \"bea984d6-9d5c-4a48-bba9-04ae6e2f0247\" (UID: \"bea984d6-9d5c-4a48-bba9-04ae6e2f0247\") " Oct 07 13:21:08 crc kubenswrapper[5024]: I1007 13:21:08.296537 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bea984d6-9d5c-4a48-bba9-04ae6e2f0247-utilities\") pod \"bea984d6-9d5c-4a48-bba9-04ae6e2f0247\" (UID: \"bea984d6-9d5c-4a48-bba9-04ae6e2f0247\") " Oct 07 13:21:08 crc kubenswrapper[5024]: I1007 13:21:08.297661 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bea984d6-9d5c-4a48-bba9-04ae6e2f0247-utilities" (OuterVolumeSpecName: "utilities") pod "bea984d6-9d5c-4a48-bba9-04ae6e2f0247" (UID: "bea984d6-9d5c-4a48-bba9-04ae6e2f0247"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:21:08 crc kubenswrapper[5024]: I1007 13:21:08.309539 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bea984d6-9d5c-4a48-bba9-04ae6e2f0247-kube-api-access-l6dbk" (OuterVolumeSpecName: "kube-api-access-l6dbk") pod "bea984d6-9d5c-4a48-bba9-04ae6e2f0247" (UID: "bea984d6-9d5c-4a48-bba9-04ae6e2f0247"). InnerVolumeSpecName "kube-api-access-l6dbk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:21:08 crc kubenswrapper[5024]: I1007 13:21:08.380665 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bea984d6-9d5c-4a48-bba9-04ae6e2f0247-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bea984d6-9d5c-4a48-bba9-04ae6e2f0247" (UID: "bea984d6-9d5c-4a48-bba9-04ae6e2f0247"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:21:08 crc kubenswrapper[5024]: I1007 13:21:08.424423 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6dbk\" (UniqueName: \"kubernetes.io/projected/bea984d6-9d5c-4a48-bba9-04ae6e2f0247-kube-api-access-l6dbk\") on node \"crc\" DevicePath \"\"" Oct 07 13:21:08 crc kubenswrapper[5024]: I1007 13:21:08.424477 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bea984d6-9d5c-4a48-bba9-04ae6e2f0247-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 13:21:08 crc kubenswrapper[5024]: I1007 13:21:08.424493 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bea984d6-9d5c-4a48-bba9-04ae6e2f0247-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 13:21:08 crc kubenswrapper[5024]: I1007 13:21:08.681641 5024 generic.go:334] "Generic (PLEG): container finished" podID="bea984d6-9d5c-4a48-bba9-04ae6e2f0247" containerID="e28c3e717009a0f4a57f05bef04850c5f9e944d53cb4d731cb763fcad71a36ee" exitCode=0 Oct 07 13:21:08 crc kubenswrapper[5024]: I1007 13:21:08.681711 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6svt" event={"ID":"bea984d6-9d5c-4a48-bba9-04ae6e2f0247","Type":"ContainerDied","Data":"e28c3e717009a0f4a57f05bef04850c5f9e944d53cb4d731cb763fcad71a36ee"} Oct 07 13:21:08 crc kubenswrapper[5024]: I1007 13:21:08.681753 5024 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-j6svt" Oct 07 13:21:08 crc kubenswrapper[5024]: I1007 13:21:08.682307 5024 scope.go:117] "RemoveContainer" containerID="e28c3e717009a0f4a57f05bef04850c5f9e944d53cb4d731cb763fcad71a36ee" Oct 07 13:21:08 crc kubenswrapper[5024]: I1007 13:21:08.682255 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6svt" event={"ID":"bea984d6-9d5c-4a48-bba9-04ae6e2f0247","Type":"ContainerDied","Data":"9de68b26c38952b9bfd07642811b7fc42982dd66e6148fe9fa9db3d5cd1fb0e4"} Oct 07 13:21:08 crc kubenswrapper[5024]: I1007 13:21:08.727069 5024 scope.go:117] "RemoveContainer" containerID="608cc92cfd83e6dc01e47b3142cf6c1ed58f052849a915bee595704ad8dce260" Oct 07 13:21:08 crc kubenswrapper[5024]: I1007 13:21:08.727950 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j6svt"] Oct 07 13:21:08 crc kubenswrapper[5024]: I1007 13:21:08.743675 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j6svt"] Oct 07 13:21:08 crc kubenswrapper[5024]: I1007 13:21:08.766310 5024 scope.go:117] "RemoveContainer" containerID="3b7b983479a81f1c03e31e5868f7cbe8fe4cf8c57f824e5640d7448dc74cef62" Oct 07 13:21:08 crc kubenswrapper[5024]: I1007 13:21:08.768681 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bea984d6-9d5c-4a48-bba9-04ae6e2f0247" path="/var/lib/kubelet/pods/bea984d6-9d5c-4a48-bba9-04ae6e2f0247/volumes" Oct 07 13:21:08 crc kubenswrapper[5024]: I1007 13:21:08.848666 5024 scope.go:117] "RemoveContainer" containerID="e28c3e717009a0f4a57f05bef04850c5f9e944d53cb4d731cb763fcad71a36ee" Oct 07 13:21:08 crc kubenswrapper[5024]: E1007 13:21:08.851070 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e28c3e717009a0f4a57f05bef04850c5f9e944d53cb4d731cb763fcad71a36ee\": container with ID 
starting with e28c3e717009a0f4a57f05bef04850c5f9e944d53cb4d731cb763fcad71a36ee not found: ID does not exist" containerID="e28c3e717009a0f4a57f05bef04850c5f9e944d53cb4d731cb763fcad71a36ee" Oct 07 13:21:08 crc kubenswrapper[5024]: I1007 13:21:08.851168 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e28c3e717009a0f4a57f05bef04850c5f9e944d53cb4d731cb763fcad71a36ee"} err="failed to get container status \"e28c3e717009a0f4a57f05bef04850c5f9e944d53cb4d731cb763fcad71a36ee\": rpc error: code = NotFound desc = could not find container \"e28c3e717009a0f4a57f05bef04850c5f9e944d53cb4d731cb763fcad71a36ee\": container with ID starting with e28c3e717009a0f4a57f05bef04850c5f9e944d53cb4d731cb763fcad71a36ee not found: ID does not exist" Oct 07 13:21:08 crc kubenswrapper[5024]: I1007 13:21:08.851225 5024 scope.go:117] "RemoveContainer" containerID="608cc92cfd83e6dc01e47b3142cf6c1ed58f052849a915bee595704ad8dce260" Oct 07 13:21:08 crc kubenswrapper[5024]: E1007 13:21:08.852030 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"608cc92cfd83e6dc01e47b3142cf6c1ed58f052849a915bee595704ad8dce260\": container with ID starting with 608cc92cfd83e6dc01e47b3142cf6c1ed58f052849a915bee595704ad8dce260 not found: ID does not exist" containerID="608cc92cfd83e6dc01e47b3142cf6c1ed58f052849a915bee595704ad8dce260" Oct 07 13:21:08 crc kubenswrapper[5024]: I1007 13:21:08.852079 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"608cc92cfd83e6dc01e47b3142cf6c1ed58f052849a915bee595704ad8dce260"} err="failed to get container status \"608cc92cfd83e6dc01e47b3142cf6c1ed58f052849a915bee595704ad8dce260\": rpc error: code = NotFound desc = could not find container \"608cc92cfd83e6dc01e47b3142cf6c1ed58f052849a915bee595704ad8dce260\": container with ID starting with 608cc92cfd83e6dc01e47b3142cf6c1ed58f052849a915bee595704ad8dce260 not found: 
ID does not exist" Oct 07 13:21:08 crc kubenswrapper[5024]: I1007 13:21:08.852114 5024 scope.go:117] "RemoveContainer" containerID="3b7b983479a81f1c03e31e5868f7cbe8fe4cf8c57f824e5640d7448dc74cef62" Oct 07 13:21:08 crc kubenswrapper[5024]: E1007 13:21:08.852613 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b7b983479a81f1c03e31e5868f7cbe8fe4cf8c57f824e5640d7448dc74cef62\": container with ID starting with 3b7b983479a81f1c03e31e5868f7cbe8fe4cf8c57f824e5640d7448dc74cef62 not found: ID does not exist" containerID="3b7b983479a81f1c03e31e5868f7cbe8fe4cf8c57f824e5640d7448dc74cef62" Oct 07 13:21:08 crc kubenswrapper[5024]: I1007 13:21:08.852655 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b7b983479a81f1c03e31e5868f7cbe8fe4cf8c57f824e5640d7448dc74cef62"} err="failed to get container status \"3b7b983479a81f1c03e31e5868f7cbe8fe4cf8c57f824e5640d7448dc74cef62\": rpc error: code = NotFound desc = could not find container \"3b7b983479a81f1c03e31e5868f7cbe8fe4cf8c57f824e5640d7448dc74cef62\": container with ID starting with 3b7b983479a81f1c03e31e5868f7cbe8fe4cf8c57f824e5640d7448dc74cef62 not found: ID does not exist" Oct 07 13:21:19 crc kubenswrapper[5024]: I1007 13:21:19.752470 5024 scope.go:117] "RemoveContainer" containerID="7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd" Oct 07 13:21:19 crc kubenswrapper[5024]: E1007 13:21:19.753675 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:21:30 crc kubenswrapper[5024]: I1007 13:21:30.753127 5024 
scope.go:117] "RemoveContainer" containerID="7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd" Oct 07 13:21:30 crc kubenswrapper[5024]: E1007 13:21:30.754185 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:21:42 crc kubenswrapper[5024]: I1007 13:21:42.763379 5024 scope.go:117] "RemoveContainer" containerID="7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd" Oct 07 13:21:42 crc kubenswrapper[5024]: E1007 13:21:42.764845 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:21:49 crc kubenswrapper[5024]: I1007 13:21:49.184268 5024 generic.go:334] "Generic (PLEG): container finished" podID="81a3838c-53ed-4367-8b5d-35295d94823c" containerID="f77005114b696d809ca67480ec917b03fe325a193a59477d96d75417e0d8c99c" exitCode=0 Oct 07 13:21:49 crc kubenswrapper[5024]: I1007 13:21:49.184389 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" event={"ID":"81a3838c-53ed-4367-8b5d-35295d94823c","Type":"ContainerDied","Data":"f77005114b696d809ca67480ec917b03fe325a193a59477d96d75417e0d8c99c"} Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.673247 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.766653 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/81a3838c-53ed-4367-8b5d-35295d94823c-nova-extra-config-0\") pod \"81a3838c-53ed-4367-8b5d-35295d94823c\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.766706 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-custom-ceph-combined-ca-bundle\") pod \"81a3838c-53ed-4367-8b5d-35295d94823c\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.766733 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-inventory\") pod \"81a3838c-53ed-4367-8b5d-35295d94823c\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.766755 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-cell1-compute-config-1\") pod \"81a3838c-53ed-4367-8b5d-35295d94823c\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.766796 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-ssh-key\") pod \"81a3838c-53ed-4367-8b5d-35295d94823c\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.766815 5024 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/81a3838c-53ed-4367-8b5d-35295d94823c-ceph-nova-0\") pod \"81a3838c-53ed-4367-8b5d-35295d94823c\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.766876 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-ceph\") pod \"81a3838c-53ed-4367-8b5d-35295d94823c\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.766909 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nnfn\" (UniqueName: \"kubernetes.io/projected/81a3838c-53ed-4367-8b5d-35295d94823c-kube-api-access-2nnfn\") pod \"81a3838c-53ed-4367-8b5d-35295d94823c\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.766935 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-migration-ssh-key-0\") pod \"81a3838c-53ed-4367-8b5d-35295d94823c\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.766967 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-migration-ssh-key-1\") pod \"81a3838c-53ed-4367-8b5d-35295d94823c\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.767067 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-cell1-compute-config-0\") pod \"81a3838c-53ed-4367-8b5d-35295d94823c\" (UID: \"81a3838c-53ed-4367-8b5d-35295d94823c\") " Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.773831 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-ceph" (OuterVolumeSpecName: "ceph") pod "81a3838c-53ed-4367-8b5d-35295d94823c" (UID: "81a3838c-53ed-4367-8b5d-35295d94823c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.774341 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "81a3838c-53ed-4367-8b5d-35295d94823c" (UID: "81a3838c-53ed-4367-8b5d-35295d94823c"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.776186 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81a3838c-53ed-4367-8b5d-35295d94823c-kube-api-access-2nnfn" (OuterVolumeSpecName: "kube-api-access-2nnfn") pod "81a3838c-53ed-4367-8b5d-35295d94823c" (UID: "81a3838c-53ed-4367-8b5d-35295d94823c"). InnerVolumeSpecName "kube-api-access-2nnfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.798426 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a3838c-53ed-4367-8b5d-35295d94823c-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "81a3838c-53ed-4367-8b5d-35295d94823c" (UID: "81a3838c-53ed-4367-8b5d-35295d94823c"). InnerVolumeSpecName "ceph-nova-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.805875 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "81a3838c-53ed-4367-8b5d-35295d94823c" (UID: "81a3838c-53ed-4367-8b5d-35295d94823c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.806689 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "81a3838c-53ed-4367-8b5d-35295d94823c" (UID: "81a3838c-53ed-4367-8b5d-35295d94823c"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.810543 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a3838c-53ed-4367-8b5d-35295d94823c-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "81a3838c-53ed-4367-8b5d-35295d94823c" (UID: "81a3838c-53ed-4367-8b5d-35295d94823c"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.812896 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "81a3838c-53ed-4367-8b5d-35295d94823c" (UID: "81a3838c-53ed-4367-8b5d-35295d94823c"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.812054 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "81a3838c-53ed-4367-8b5d-35295d94823c" (UID: "81a3838c-53ed-4367-8b5d-35295d94823c"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.815457 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-inventory" (OuterVolumeSpecName: "inventory") pod "81a3838c-53ed-4367-8b5d-35295d94823c" (UID: "81a3838c-53ed-4367-8b5d-35295d94823c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.817587 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "81a3838c-53ed-4367-8b5d-35295d94823c" (UID: "81a3838c-53ed-4367-8b5d-35295d94823c"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.870132 5024 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-ceph\") on node \"crc\" DevicePath \"\"" Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.870184 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nnfn\" (UniqueName: \"kubernetes.io/projected/81a3838c-53ed-4367-8b5d-35295d94823c-kube-api-access-2nnfn\") on node \"crc\" DevicePath \"\"" Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.870199 5024 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.870212 5024 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.870225 5024 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.870238 5024 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/81a3838c-53ed-4367-8b5d-35295d94823c-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.870250 5024 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-custom-ceph-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.870264 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-inventory\") on node \"crc\" DevicePath \"\"" Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.870279 5024 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.870290 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81a3838c-53ed-4367-8b5d-35295d94823c-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 07 13:21:50 crc kubenswrapper[5024]: I1007 13:21:50.870301 5024 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/81a3838c-53ed-4367-8b5d-35295d94823c-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Oct 07 13:21:51 crc kubenswrapper[5024]: I1007 13:21:51.217364 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" event={"ID":"81a3838c-53ed-4367-8b5d-35295d94823c","Type":"ContainerDied","Data":"76a3ca654ce9710bf48c6755b32bfc10d50442c4ba31df5eecca8d9b32051454"} Oct 07 13:21:51 crc kubenswrapper[5024]: I1007 13:21:51.217773 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76a3ca654ce9710bf48c6755b32bfc10d50442c4ba31df5eecca8d9b32051454" Oct 07 13:21:51 crc kubenswrapper[5024]: I1007 13:21:51.217500 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw" Oct 07 13:21:55 crc kubenswrapper[5024]: I1007 13:21:55.752321 5024 scope.go:117] "RemoveContainer" containerID="7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd" Oct 07 13:21:55 crc kubenswrapper[5024]: E1007 13:21:55.753638 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.821115 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Oct 07 13:22:06 crc kubenswrapper[5024]: E1007 13:22:06.832872 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bea984d6-9d5c-4a48-bba9-04ae6e2f0247" containerName="extract-content" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.832921 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="bea984d6-9d5c-4a48-bba9-04ae6e2f0247" containerName="extract-content" Oct 07 13:22:06 crc kubenswrapper[5024]: E1007 13:22:06.832942 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bea984d6-9d5c-4a48-bba9-04ae6e2f0247" containerName="extract-utilities" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.832957 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="bea984d6-9d5c-4a48-bba9-04ae6e2f0247" containerName="extract-utilities" Oct 07 13:22:06 crc kubenswrapper[5024]: E1007 13:22:06.833035 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81a3838c-53ed-4367-8b5d-35295d94823c" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Oct 07 13:22:06 crc 
kubenswrapper[5024]: I1007 13:22:06.833048 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="81a3838c-53ed-4367-8b5d-35295d94823c" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Oct 07 13:22:06 crc kubenswrapper[5024]: E1007 13:22:06.833082 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bea984d6-9d5c-4a48-bba9-04ae6e2f0247" containerName="registry-server" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.833091 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="bea984d6-9d5c-4a48-bba9-04ae6e2f0247" containerName="registry-server" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.833875 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="81a3838c-53ed-4367-8b5d-35295d94823c" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.833939 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="bea984d6-9d5c-4a48-bba9-04ae6e2f0247" containerName="registry-server" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.839278 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.842642 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.848253 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.852960 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.855178 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-run\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.868465 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.868623 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.868672 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-etc-iscsi\") pod 
\"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.868751 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.868804 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.868881 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-dev\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.868903 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.868937 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-var-lib-cinder\") pod \"cinder-volume-volume1-0\" 
(UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.868964 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.869064 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.869091 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.870778 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.870874 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd7nh\" (UniqueName: \"kubernetes.io/projected/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-kube-api-access-fd7nh\") pod \"cinder-volume-volume1-0\" (UID: 
\"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.871018 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.871053 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-sys\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.903790 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.905748 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.907982 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.915918 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.972900 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-lib-modules\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.972969 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-run\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.973014 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.973033 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.973050 5024 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.973070 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/893e2de7-0613-4750-a8a3-6630394129aa-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.973085 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-sys\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.973104 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.973121 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.973165 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-var-locks-brick\") 
pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.973183 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.973199 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/893e2de7-0613-4750-a8a3-6630394129aa-config-data\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.973223 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.973241 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/893e2de7-0613-4750-a8a3-6630394129aa-scripts\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.973255 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:06 
crc kubenswrapper[5024]: I1007 13:22:06.973280 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-dev\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.973297 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.973312 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.973331 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.973370 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.973388 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.973405 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-etc-nvme\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.973421 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.973448 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-run\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.973465 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/893e2de7-0613-4750-a8a3-6630394129aa-ceph\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.973481 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-dev\") pod \"cinder-backup-0\" (UID: 
\"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.973499 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd7nh\" (UniqueName: \"kubernetes.io/projected/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-kube-api-access-fd7nh\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.974463 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.974659 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.975600 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-dev\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.975665 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.975686 5024 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-run\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.975849 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.976048 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.976119 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8vwc\" (UniqueName: \"kubernetes.io/projected/893e2de7-0613-4750-a8a3-6630394129aa-kube-api-access-t8vwc\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.976213 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/893e2de7-0613-4750-a8a3-6630394129aa-config-data-custom\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.976255 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.976312 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-sys\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.976430 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-sys\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.976491 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.976526 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.976685 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 
13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.982260 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.982567 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.982603 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.983577 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.992939 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd7nh\" (UniqueName: \"kubernetes.io/projected/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-kube-api-access-fd7nh\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:06 crc kubenswrapper[5024]: I1007 13:22:06.994965 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d92bf1fe-58e8-4f42-bd82-bcde5acdf07e-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e\") " pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.078709 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-run\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.078779 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/893e2de7-0613-4750-a8a3-6630394129aa-ceph\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.078810 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-dev\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.078844 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.078880 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8vwc\" (UniqueName: \"kubernetes.io/projected/893e2de7-0613-4750-a8a3-6630394129aa-kube-api-access-t8vwc\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc 
kubenswrapper[5024]: I1007 13:22:07.078909 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/893e2de7-0613-4750-a8a3-6630394129aa-config-data-custom\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.078953 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-lib-modules\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.079018 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.079050 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.079082 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/893e2de7-0613-4750-a8a3-6630394129aa-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.079108 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-sys\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.079181 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.079207 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/893e2de7-0613-4750-a8a3-6630394129aa-config-data\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.079256 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/893e2de7-0613-4750-a8a3-6630394129aa-scripts\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.079280 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.079380 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-etc-nvme\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: 
I1007 13:22:07.079530 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-dev\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.079556 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-etc-nvme\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.079622 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.079661 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-run\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.079697 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-sys\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.079735 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-lib-modules\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " 
pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.079768 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.079801 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.080705 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.080770 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/893e2de7-0613-4750-a8a3-6630394129aa-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.084384 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/893e2de7-0613-4750-a8a3-6630394129aa-scripts\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.084461 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/893e2de7-0613-4750-a8a3-6630394129aa-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.084738 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/893e2de7-0613-4750-a8a3-6630394129aa-ceph\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.085763 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/893e2de7-0613-4750-a8a3-6630394129aa-config-data\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.088758 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/893e2de7-0613-4750-a8a3-6630394129aa-config-data-custom\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.100857 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8vwc\" (UniqueName: \"kubernetes.io/projected/893e2de7-0613-4750-a8a3-6630394129aa-kube-api-access-t8vwc\") pod \"cinder-backup-0\" (UID: \"893e2de7-0613-4750-a8a3-6630394129aa\") " pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.177058 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.232059 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.501107 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-75p7j"] Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.502930 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-75p7j" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.517071 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-75p7j"] Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.574594 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-686c88cb4f-mfxbx"] Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.576385 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-686c88cb4f-mfxbx" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.581736 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.581916 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.582040 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.582160 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-7g54t" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.592365 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h7n7\" (UniqueName: \"kubernetes.io/projected/66c8aa1b-83b7-456a-a8be-770922c03068-kube-api-access-5h7n7\") pod \"manila-db-create-75p7j\" (UID: \"66c8aa1b-83b7-456a-a8be-770922c03068\") " pod="openstack/manila-db-create-75p7j" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 
13:22:07.592758 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-686c88cb4f-mfxbx"] Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.647209 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.648921 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.661447 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.662082 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.662177 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.662388 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-k8xkf" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.693759 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h7n7\" (UniqueName: \"kubernetes.io/projected/66c8aa1b-83b7-456a-a8be-770922c03068-kube-api-access-5h7n7\") pod \"manila-db-create-75p7j\" (UID: \"66c8aa1b-83b7-456a-a8be-770922c03068\") " pod="openstack/manila-db-create-75p7j" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.705201 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.718438 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h7n7\" (UniqueName: \"kubernetes.io/projected/66c8aa1b-83b7-456a-a8be-770922c03068-kube-api-access-5h7n7\") pod \"manila-db-create-75p7j\" 
(UID: \"66c8aa1b-83b7-456a-a8be-770922c03068\") " pod="openstack/manila-db-create-75p7j" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.751915 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5c888bc74f-qhpjd"] Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.753576 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c888bc74f-qhpjd" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.761491 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c888bc74f-qhpjd"] Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.783334 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.785534 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.792113 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.792375 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.795808 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6803598d-855a-4bca-bf80-3427b6b516f3-horizon-secret-key\") pod \"horizon-686c88cb4f-mfxbx\" (UID: \"6803598d-855a-4bca-bf80-3427b6b516f3\") " pod="openstack/horizon-686c88cb4f-mfxbx" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.795859 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098c1243-2802-4c9e-8627-57ccbed37148-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.795886 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vftdp\" (UniqueName: \"kubernetes.io/projected/6803598d-855a-4bca-bf80-3427b6b516f3-kube-api-access-vftdp\") pod \"horizon-686c88cb4f-mfxbx\" (UID: \"6803598d-855a-4bca-bf80-3427b6b516f3\") " pod="openstack/horizon-686c88cb4f-mfxbx" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.795908 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6803598d-855a-4bca-bf80-3427b6b516f3-logs\") pod \"horizon-686c88cb4f-mfxbx\" (UID: \"6803598d-855a-4bca-bf80-3427b6b516f3\") " pod="openstack/horizon-686c88cb4f-mfxbx" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.795930 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.795962 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvwft\" (UniqueName: \"kubernetes.io/projected/098c1243-2802-4c9e-8627-57ccbed37148-kube-api-access-rvwft\") pod \"glance-default-external-api-0\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.795981 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/098c1243-2802-4c9e-8627-57ccbed37148-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.796011 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/098c1243-2802-4c9e-8627-57ccbed37148-ceph\") pod \"glance-default-external-api-0\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.796032 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/098c1243-2802-4c9e-8627-57ccbed37148-logs\") pod \"glance-default-external-api-0\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.796056 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6803598d-855a-4bca-bf80-3427b6b516f3-scripts\") pod \"horizon-686c88cb4f-mfxbx\" (UID: \"6803598d-855a-4bca-bf80-3427b6b516f3\") " pod="openstack/horizon-686c88cb4f-mfxbx" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.796075 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098c1243-2802-4c9e-8627-57ccbed37148-config-data\") pod \"glance-default-external-api-0\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.796094 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6803598d-855a-4bca-bf80-3427b6b516f3-config-data\") pod \"horizon-686c88cb4f-mfxbx\" 
(UID: \"6803598d-855a-4bca-bf80-3427b6b516f3\") " pod="openstack/horizon-686c88cb4f-mfxbx" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.796120 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/098c1243-2802-4c9e-8627-57ccbed37148-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.796172 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/098c1243-2802-4c9e-8627-57ccbed37148-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.797130 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.823729 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-75p7j" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.902503 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/098c1243-2802-4c9e-8627-57ccbed37148-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.906332 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a9d721-ab0c-453a-9f04-5614717802ca-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.906433 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a9d721-ab0c-453a-9f04-5614717802ca-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.906660 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/63a9d721-ab0c-453a-9f04-5614717802ca-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.906781 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6803598d-855a-4bca-bf80-3427b6b516f3-horizon-secret-key\") pod \"horizon-686c88cb4f-mfxbx\" (UID: 
\"6803598d-855a-4bca-bf80-3427b6b516f3\") " pod="openstack/horizon-686c88cb4f-mfxbx" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.906886 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098c1243-2802-4c9e-8627-57ccbed37148-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.906963 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vftdp\" (UniqueName: \"kubernetes.io/projected/6803598d-855a-4bca-bf80-3427b6b516f3-kube-api-access-vftdp\") pod \"horizon-686c88cb4f-mfxbx\" (UID: \"6803598d-855a-4bca-bf80-3427b6b516f3\") " pod="openstack/horizon-686c88cb4f-mfxbx" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.907035 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6803598d-855a-4bca-bf80-3427b6b516f3-logs\") pod \"horizon-686c88cb4f-mfxbx\" (UID: \"6803598d-855a-4bca-bf80-3427b6b516f3\") " pod="openstack/horizon-686c88cb4f-mfxbx" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.907177 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.909200 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvwft\" (UniqueName: \"kubernetes.io/projected/098c1243-2802-4c9e-8627-57ccbed37148-kube-api-access-rvwft\") pod \"glance-default-external-api-0\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " 
pod="openstack/glance-default-external-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.909307 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/098c1243-2802-4c9e-8627-57ccbed37148-scripts\") pod \"glance-default-external-api-0\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.909393 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1c7793-5541-45f4-ba71-a2094dcd051d-logs\") pod \"horizon-5c888bc74f-qhpjd\" (UID: \"de1c7793-5541-45f4-ba71-a2094dcd051d\") " pod="openstack/horizon-5c888bc74f-qhpjd" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.909508 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a9d721-ab0c-453a-9f04-5614717802ca-logs\") pod \"glance-default-internal-api-0\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.909698 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.909799 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/de1c7793-5541-45f4-ba71-a2094dcd051d-horizon-secret-key\") pod \"horizon-5c888bc74f-qhpjd\" (UID: \"de1c7793-5541-45f4-ba71-a2094dcd051d\") " pod="openstack/horizon-5c888bc74f-qhpjd" Oct 07 13:22:07 
crc kubenswrapper[5024]: I1007 13:22:07.909896 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/098c1243-2802-4c9e-8627-57ccbed37148-ceph\") pod \"glance-default-external-api-0\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.909983 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/098c1243-2802-4c9e-8627-57ccbed37148-logs\") pod \"glance-default-external-api-0\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.907799 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6803598d-855a-4bca-bf80-3427b6b516f3-logs\") pod \"horizon-686c88cb4f-mfxbx\" (UID: \"6803598d-855a-4bca-bf80-3427b6b516f3\") " pod="openstack/horizon-686c88cb4f-mfxbx" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.910080 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/63a9d721-ab0c-453a-9f04-5614717802ca-ceph\") pod \"glance-default-internal-api-0\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.907820 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.910771 5024 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/098c1243-2802-4c9e-8627-57ccbed37148-logs\") pod \"glance-default-external-api-0\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.911991 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098c1243-2802-4c9e-8627-57ccbed37148-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.912065 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a9d721-ab0c-453a-9f04-5614717802ca-scripts\") pod \"glance-default-internal-api-0\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.912088 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twfd4\" (UniqueName: \"kubernetes.io/projected/de1c7793-5541-45f4-ba71-a2094dcd051d-kube-api-access-twfd4\") pod \"horizon-5c888bc74f-qhpjd\" (UID: \"de1c7793-5541-45f4-ba71-a2094dcd051d\") " pod="openstack/horizon-5c888bc74f-qhpjd" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.912151 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6803598d-855a-4bca-bf80-3427b6b516f3-scripts\") pod \"horizon-686c88cb4f-mfxbx\" (UID: \"6803598d-855a-4bca-bf80-3427b6b516f3\") " pod="openstack/horizon-686c88cb4f-mfxbx" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.912191 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/098c1243-2802-4c9e-8627-57ccbed37148-config-data\") pod \"glance-default-external-api-0\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.912238 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6803598d-855a-4bca-bf80-3427b6b516f3-config-data\") pod \"horizon-686c88cb4f-mfxbx\" (UID: \"6803598d-855a-4bca-bf80-3427b6b516f3\") " pod="openstack/horizon-686c88cb4f-mfxbx" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.912284 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/098c1243-2802-4c9e-8627-57ccbed37148-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.912340 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a9d721-ab0c-453a-9f04-5614717802ca-config-data\") pod \"glance-default-internal-api-0\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.912368 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn7kl\" (UniqueName: \"kubernetes.io/projected/63a9d721-ab0c-453a-9f04-5614717802ca-kube-api-access-sn7kl\") pod \"glance-default-internal-api-0\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.912396 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/de1c7793-5541-45f4-ba71-a2094dcd051d-config-data\") pod \"horizon-5c888bc74f-qhpjd\" (UID: \"de1c7793-5541-45f4-ba71-a2094dcd051d\") " pod="openstack/horizon-5c888bc74f-qhpjd" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.912421 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de1c7793-5541-45f4-ba71-a2094dcd051d-scripts\") pod \"horizon-5c888bc74f-qhpjd\" (UID: \"de1c7793-5541-45f4-ba71-a2094dcd051d\") " pod="openstack/horizon-5c888bc74f-qhpjd" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.912792 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6803598d-855a-4bca-bf80-3427b6b516f3-scripts\") pod \"horizon-686c88cb4f-mfxbx\" (UID: \"6803598d-855a-4bca-bf80-3427b6b516f3\") " pod="openstack/horizon-686c88cb4f-mfxbx" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.914073 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6803598d-855a-4bca-bf80-3427b6b516f3-config-data\") pod \"horizon-686c88cb4f-mfxbx\" (UID: \"6803598d-855a-4bca-bf80-3427b6b516f3\") " pod="openstack/horizon-686c88cb4f-mfxbx" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.914211 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.914316 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/098c1243-2802-4c9e-8627-57ccbed37148-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.914729 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/098c1243-2802-4c9e-8627-57ccbed37148-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.915730 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6803598d-855a-4bca-bf80-3427b6b516f3-horizon-secret-key\") pod \"horizon-686c88cb4f-mfxbx\" (UID: \"6803598d-855a-4bca-bf80-3427b6b516f3\") " pod="openstack/horizon-686c88cb4f-mfxbx" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.915927 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098c1243-2802-4c9e-8627-57ccbed37148-config-data\") pod \"glance-default-external-api-0\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.918053 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/098c1243-2802-4c9e-8627-57ccbed37148-ceph\") pod \"glance-default-external-api-0\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.918399 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/098c1243-2802-4c9e-8627-57ccbed37148-scripts\") pod \"glance-default-external-api-0\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.928466 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvwft\" (UniqueName: \"kubernetes.io/projected/098c1243-2802-4c9e-8627-57ccbed37148-kube-api-access-rvwft\") pod 
\"glance-default-external-api-0\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.930719 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vftdp\" (UniqueName: \"kubernetes.io/projected/6803598d-855a-4bca-bf80-3427b6b516f3-kube-api-access-vftdp\") pod \"horizon-686c88cb4f-mfxbx\" (UID: \"6803598d-855a-4bca-bf80-3427b6b516f3\") " pod="openstack/horizon-686c88cb4f-mfxbx" Oct 07 13:22:07 crc kubenswrapper[5024]: I1007 13:22:07.947545 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.013807 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a9d721-ab0c-453a-9f04-5614717802ca-config-data\") pod \"glance-default-internal-api-0\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.013852 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn7kl\" (UniqueName: \"kubernetes.io/projected/63a9d721-ab0c-453a-9f04-5614717802ca-kube-api-access-sn7kl\") pod \"glance-default-internal-api-0\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.013878 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de1c7793-5541-45f4-ba71-a2094dcd051d-config-data\") pod \"horizon-5c888bc74f-qhpjd\" (UID: \"de1c7793-5541-45f4-ba71-a2094dcd051d\") " 
pod="openstack/horizon-5c888bc74f-qhpjd" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.013897 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de1c7793-5541-45f4-ba71-a2094dcd051d-scripts\") pod \"horizon-5c888bc74f-qhpjd\" (UID: \"de1c7793-5541-45f4-ba71-a2094dcd051d\") " pod="openstack/horizon-5c888bc74f-qhpjd" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.013932 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a9d721-ab0c-453a-9f04-5614717802ca-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.013949 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a9d721-ab0c-453a-9f04-5614717802ca-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.014008 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/63a9d721-ab0c-453a-9f04-5614717802ca-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.014075 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1c7793-5541-45f4-ba71-a2094dcd051d-logs\") pod \"horizon-5c888bc74f-qhpjd\" (UID: \"de1c7793-5541-45f4-ba71-a2094dcd051d\") " pod="openstack/horizon-5c888bc74f-qhpjd" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 
13:22:08.014102 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a9d721-ab0c-453a-9f04-5614717802ca-logs\") pod \"glance-default-internal-api-0\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.014124 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.014158 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/de1c7793-5541-45f4-ba71-a2094dcd051d-horizon-secret-key\") pod \"horizon-5c888bc74f-qhpjd\" (UID: \"de1c7793-5541-45f4-ba71-a2094dcd051d\") " pod="openstack/horizon-5c888bc74f-qhpjd" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.014180 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/63a9d721-ab0c-453a-9f04-5614717802ca-ceph\") pod \"glance-default-internal-api-0\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.014198 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a9d721-ab0c-453a-9f04-5614717802ca-scripts\") pod \"glance-default-internal-api-0\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.014214 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twfd4\" 
(UniqueName: \"kubernetes.io/projected/de1c7793-5541-45f4-ba71-a2094dcd051d-kube-api-access-twfd4\") pod \"horizon-5c888bc74f-qhpjd\" (UID: \"de1c7793-5541-45f4-ba71-a2094dcd051d\") " pod="openstack/horizon-5c888bc74f-qhpjd" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.014538 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a9d721-ab0c-453a-9f04-5614717802ca-logs\") pod \"glance-default-internal-api-0\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.014792 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.015247 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de1c7793-5541-45f4-ba71-a2094dcd051d-config-data\") pod \"horizon-5c888bc74f-qhpjd\" (UID: \"de1c7793-5541-45f4-ba71-a2094dcd051d\") " pod="openstack/horizon-5c888bc74f-qhpjd" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.016451 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de1c7793-5541-45f4-ba71-a2094dcd051d-scripts\") pod \"horizon-5c888bc74f-qhpjd\" (UID: \"de1c7793-5541-45f4-ba71-a2094dcd051d\") " pod="openstack/horizon-5c888bc74f-qhpjd" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.018128 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a9d721-ab0c-453a-9f04-5614717802ca-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.024838 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/63a9d721-ab0c-453a-9f04-5614717802ca-ceph\") pod \"glance-default-internal-api-0\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.024929 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a9d721-ab0c-453a-9f04-5614717802ca-config-data\") pod \"glance-default-internal-api-0\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.025090 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/63a9d721-ab0c-453a-9f04-5614717802ca-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.025267 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1c7793-5541-45f4-ba71-a2094dcd051d-logs\") pod \"horizon-5c888bc74f-qhpjd\" (UID: \"de1c7793-5541-45f4-ba71-a2094dcd051d\") " pod="openstack/horizon-5c888bc74f-qhpjd" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.026077 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a9d721-ab0c-453a-9f04-5614717802ca-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:08 
crc kubenswrapper[5024]: I1007 13:22:08.026088 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.030123 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a9d721-ab0c-453a-9f04-5614717802ca-scripts\") pod \"glance-default-internal-api-0\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.031285 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/de1c7793-5541-45f4-ba71-a2094dcd051d-horizon-secret-key\") pod \"horizon-5c888bc74f-qhpjd\" (UID: \"de1c7793-5541-45f4-ba71-a2094dcd051d\") " pod="openstack/horizon-5c888bc74f-qhpjd" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.039250 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twfd4\" (UniqueName: \"kubernetes.io/projected/de1c7793-5541-45f4-ba71-a2094dcd051d-kube-api-access-twfd4\") pod \"horizon-5c888bc74f-qhpjd\" (UID: \"de1c7793-5541-45f4-ba71-a2094dcd051d\") " pod="openstack/horizon-5c888bc74f-qhpjd" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.050353 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn7kl\" (UniqueName: \"kubernetes.io/projected/63a9d721-ab0c-453a-9f04-5614717802ca-kube-api-access-sn7kl\") pod \"glance-default-internal-api-0\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.055535 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"63a9d721-ab0c-453a-9f04-5614717802ca\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.072495 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c888bc74f-qhpjd" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.136874 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.221684 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-686c88cb4f-mfxbx" Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.319329 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-75p7j"] Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.420789 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-75p7j" event={"ID":"66c8aa1b-83b7-456a-a8be-770922c03068","Type":"ContainerStarted","Data":"9b16ccdaba86a58cf09c12346689b1ed86abd5a4c086f13eeb0017ffd0538dc6"} Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.427127 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e","Type":"ContainerStarted","Data":"512405f7cfe37f81620de2c361d57d18108948bb1f65a420f1e7a103f33c7c0b"} Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.506620 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.516428 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c888bc74f-qhpjd"] Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.663207 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.818525 5024 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/glance-default-internal-api-0"] Oct 07 13:22:08 crc kubenswrapper[5024]: I1007 13:22:08.867099 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-686c88cb4f-mfxbx"] Oct 07 13:22:09 crc kubenswrapper[5024]: I1007 13:22:09.441662 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"893e2de7-0613-4750-a8a3-6630394129aa","Type":"ContainerStarted","Data":"cbe5b9aa054ae65c252c5c4266a4c18f47c9440460894987d44204d566bdc321"} Oct 07 13:22:09 crc kubenswrapper[5024]: I1007 13:22:09.447830 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-686c88cb4f-mfxbx" event={"ID":"6803598d-855a-4bca-bf80-3427b6b516f3","Type":"ContainerStarted","Data":"3bcfdb7842fee653ff28a17ea7dc21269036aa976430c39a8b288fc052cc792e"} Oct 07 13:22:09 crc kubenswrapper[5024]: I1007 13:22:09.450839 5024 generic.go:334] "Generic (PLEG): container finished" podID="66c8aa1b-83b7-456a-a8be-770922c03068" containerID="21ea78c85e895b77ef72c288a5a169fbf080ed41efd1a385d05e57688ede3e53" exitCode=0 Oct 07 13:22:09 crc kubenswrapper[5024]: I1007 13:22:09.451338 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-75p7j" event={"ID":"66c8aa1b-83b7-456a-a8be-770922c03068","Type":"ContainerDied","Data":"21ea78c85e895b77ef72c288a5a169fbf080ed41efd1a385d05e57688ede3e53"} Oct 07 13:22:09 crc kubenswrapper[5024]: I1007 13:22:09.456813 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"098c1243-2802-4c9e-8627-57ccbed37148","Type":"ContainerStarted","Data":"68b5498277e1d8519739481ee915d988a3609121a3bb02ad33e70d536d8e16d3"} Oct 07 13:22:09 crc kubenswrapper[5024]: I1007 13:22:09.493933 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" 
event={"ID":"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e","Type":"ContainerStarted","Data":"e4a0d847c178317bb4ab9f474f6f9195d9ef87f3dc89f93ad9a472079e4ebb8f"} Oct 07 13:22:09 crc kubenswrapper[5024]: I1007 13:22:09.496924 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c888bc74f-qhpjd" event={"ID":"de1c7793-5541-45f4-ba71-a2094dcd051d","Type":"ContainerStarted","Data":"d5dbfb080dce90d6b735930d37f09d859f96ad9a5823c2b6341295974670ae83"} Oct 07 13:22:09 crc kubenswrapper[5024]: I1007 13:22:09.498368 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"63a9d721-ab0c-453a-9f04-5614717802ca","Type":"ContainerStarted","Data":"2cb8252a3b55803fd348191a4d31ef8377fd8aa9ee51669c5fb99633c8149523"} Oct 07 13:22:09 crc kubenswrapper[5024]: I1007 13:22:09.753517 5024 scope.go:117] "RemoveContainer" containerID="7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd" Oct 07 13:22:09 crc kubenswrapper[5024]: E1007 13:22:09.757346 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.224469 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c888bc74f-qhpjd"] Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.249268 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6db8948-lncr7"] Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.251904 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6db8948-lncr7" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.253802 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6db8948-lncr7"] Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.255170 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.263264 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.368882 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-686c88cb4f-mfxbx"] Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.370193 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-horizon-secret-key\") pod \"horizon-6db8948-lncr7\" (UID: \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\") " pod="openstack/horizon-6db8948-lncr7" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.370264 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-scripts\") pod \"horizon-6db8948-lncr7\" (UID: \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\") " pod="openstack/horizon-6db8948-lncr7" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.370306 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5lmx\" (UniqueName: \"kubernetes.io/projected/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-kube-api-access-d5lmx\") pod \"horizon-6db8948-lncr7\" (UID: \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\") " pod="openstack/horizon-6db8948-lncr7" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.370370 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-logs\") pod \"horizon-6db8948-lncr7\" (UID: \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\") " pod="openstack/horizon-6db8948-lncr7" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.370399 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-config-data\") pod \"horizon-6db8948-lncr7\" (UID: \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\") " pod="openstack/horizon-6db8948-lncr7" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.370448 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-combined-ca-bundle\") pod \"horizon-6db8948-lncr7\" (UID: \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\") " pod="openstack/horizon-6db8948-lncr7" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.370623 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-horizon-tls-certs\") pod \"horizon-6db8948-lncr7\" (UID: \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\") " pod="openstack/horizon-6db8948-lncr7" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.389253 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.405387 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6748775596-w8q6s"] Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.408330 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6748775596-w8q6s" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.419038 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6748775596-w8q6s"] Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.477481 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-config-data\") pod \"horizon-6db8948-lncr7\" (UID: \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\") " pod="openstack/horizon-6db8948-lncr7" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.477546 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2qj4\" (UniqueName: \"kubernetes.io/projected/19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c-kube-api-access-z2qj4\") pod \"horizon-6748775596-w8q6s\" (UID: \"19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c\") " pod="openstack/horizon-6748775596-w8q6s" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.477606 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-combined-ca-bundle\") pod \"horizon-6db8948-lncr7\" (UID: \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\") " pod="openstack/horizon-6db8948-lncr7" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.477636 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-horizon-tls-certs\") pod \"horizon-6db8948-lncr7\" (UID: \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\") " pod="openstack/horizon-6db8948-lncr7" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.477656 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c-scripts\") pod \"horizon-6748775596-w8q6s\" (UID: \"19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c\") " pod="openstack/horizon-6748775596-w8q6s" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.477677 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c-combined-ca-bundle\") pod \"horizon-6748775596-w8q6s\" (UID: \"19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c\") " pod="openstack/horizon-6748775596-w8q6s" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.477700 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-horizon-secret-key\") pod \"horizon-6db8948-lncr7\" (UID: \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\") " pod="openstack/horizon-6db8948-lncr7" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.477730 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c-horizon-secret-key\") pod \"horizon-6748775596-w8q6s\" (UID: \"19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c\") " pod="openstack/horizon-6748775596-w8q6s" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.477755 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-scripts\") pod \"horizon-6db8948-lncr7\" (UID: \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\") " pod="openstack/horizon-6db8948-lncr7" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.477774 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c-logs\") pod \"horizon-6748775596-w8q6s\" (UID: \"19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c\") " pod="openstack/horizon-6748775596-w8q6s" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.477805 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c-config-data\") pod \"horizon-6748775596-w8q6s\" (UID: \"19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c\") " pod="openstack/horizon-6748775596-w8q6s" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.477823 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5lmx\" (UniqueName: \"kubernetes.io/projected/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-kube-api-access-d5lmx\") pod \"horizon-6db8948-lncr7\" (UID: \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\") " pod="openstack/horizon-6db8948-lncr7" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.477886 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-logs\") pod \"horizon-6db8948-lncr7\" (UID: \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\") " pod="openstack/horizon-6db8948-lncr7" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.477909 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c-horizon-tls-certs\") pod \"horizon-6748775596-w8q6s\" (UID: \"19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c\") " pod="openstack/horizon-6748775596-w8q6s" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.479837 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-config-data\") pod 
\"horizon-6db8948-lncr7\" (UID: \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\") " pod="openstack/horizon-6db8948-lncr7" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.481824 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-scripts\") pod \"horizon-6db8948-lncr7\" (UID: \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\") " pod="openstack/horizon-6db8948-lncr7" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.482555 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-logs\") pod \"horizon-6db8948-lncr7\" (UID: \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\") " pod="openstack/horizon-6db8948-lncr7" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.487738 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-horizon-secret-key\") pod \"horizon-6db8948-lncr7\" (UID: \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\") " pod="openstack/horizon-6db8948-lncr7" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.490526 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-combined-ca-bundle\") pod \"horizon-6db8948-lncr7\" (UID: \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\") " pod="openstack/horizon-6db8948-lncr7" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.497963 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5lmx\" (UniqueName: \"kubernetes.io/projected/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-kube-api-access-d5lmx\") pod \"horizon-6db8948-lncr7\" (UID: \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\") " pod="openstack/horizon-6db8948-lncr7" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 
13:22:10.521755 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-horizon-tls-certs\") pod \"horizon-6db8948-lncr7\" (UID: \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\") " pod="openstack/horizon-6db8948-lncr7" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.545287 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"63a9d721-ab0c-453a-9f04-5614717802ca","Type":"ContainerStarted","Data":"d91c11f2e47da54239805c6db43d78dd285759eda71943be266085b0d43ca093"} Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.552938 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"893e2de7-0613-4750-a8a3-6630394129aa","Type":"ContainerStarted","Data":"53c082b0cbb6c3b76af13be045a09393c5f86706bc0112937df0f83793e2bd85"} Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.569473 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"098c1243-2802-4c9e-8627-57ccbed37148","Type":"ContainerStarted","Data":"f372df94ecfed2cd6537ea9788a35f816b10a6254c20d38cc7eba1c2c2222905"} Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.577818 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"d92bf1fe-58e8-4f42-bd82-bcde5acdf07e","Type":"ContainerStarted","Data":"4db0c77610d06d961c0135d6dfd7e1b083ee5e241d3bd7c19d88d8c0d28646d5"} Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.584787 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c-logs\") pod \"horizon-6748775596-w8q6s\" (UID: \"19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c\") " pod="openstack/horizon-6748775596-w8q6s" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.584923 5024 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c-config-data\") pod \"horizon-6748775596-w8q6s\" (UID: \"19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c\") " pod="openstack/horizon-6748775596-w8q6s" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.585155 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c-horizon-tls-certs\") pod \"horizon-6748775596-w8q6s\" (UID: \"19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c\") " pod="openstack/horizon-6748775596-w8q6s" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.585214 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2qj4\" (UniqueName: \"kubernetes.io/projected/19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c-kube-api-access-z2qj4\") pod \"horizon-6748775596-w8q6s\" (UID: \"19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c\") " pod="openstack/horizon-6748775596-w8q6s" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.585359 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c-scripts\") pod \"horizon-6748775596-w8q6s\" (UID: \"19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c\") " pod="openstack/horizon-6748775596-w8q6s" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.585402 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c-combined-ca-bundle\") pod \"horizon-6748775596-w8q6s\" (UID: \"19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c\") " pod="openstack/horizon-6748775596-w8q6s" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.585457 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c-horizon-secret-key\") pod \"horizon-6748775596-w8q6s\" (UID: \"19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c\") " pod="openstack/horizon-6748775596-w8q6s" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.585670 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c-logs\") pod \"horizon-6748775596-w8q6s\" (UID: \"19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c\") " pod="openstack/horizon-6748775596-w8q6s" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.586513 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c-scripts\") pod \"horizon-6748775596-w8q6s\" (UID: \"19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c\") " pod="openstack/horizon-6748775596-w8q6s" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.590497 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c-config-data\") pod \"horizon-6748775596-w8q6s\" (UID: \"19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c\") " pod="openstack/horizon-6748775596-w8q6s" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.595643 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c-horizon-tls-certs\") pod \"horizon-6748775596-w8q6s\" (UID: \"19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c\") " pod="openstack/horizon-6748775596-w8q6s" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.597406 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c-combined-ca-bundle\") pod \"horizon-6748775596-w8q6s\" (UID: 
\"19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c\") " pod="openstack/horizon-6748775596-w8q6s" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.601823 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c-horizon-secret-key\") pod \"horizon-6748775596-w8q6s\" (UID: \"19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c\") " pod="openstack/horizon-6748775596-w8q6s" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.607847 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6db8948-lncr7" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.612543 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=3.427928578 podStartE2EDuration="4.612481198s" podCreationTimestamp="2025-10-07 13:22:06 +0000 UTC" firstStartedPulling="2025-10-07 13:22:07.922467296 +0000 UTC m=+3265.998254134" lastFinishedPulling="2025-10-07 13:22:09.107019916 +0000 UTC m=+3267.182806754" observedRunningTime="2025-10-07 13:22:10.601928073 +0000 UTC m=+3268.677714911" watchObservedRunningTime="2025-10-07 13:22:10.612481198 +0000 UTC m=+3268.688268036" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.620823 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2qj4\" (UniqueName: \"kubernetes.io/projected/19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c-kube-api-access-z2qj4\") pod \"horizon-6748775596-w8q6s\" (UID: \"19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c\") " pod="openstack/horizon-6748775596-w8q6s" Oct 07 13:22:10 crc kubenswrapper[5024]: I1007 13:22:10.894887 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6748775596-w8q6s" Oct 07 13:22:11 crc kubenswrapper[5024]: I1007 13:22:11.165451 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-75p7j" Oct 07 13:22:11 crc kubenswrapper[5024]: I1007 13:22:11.211680 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h7n7\" (UniqueName: \"kubernetes.io/projected/66c8aa1b-83b7-456a-a8be-770922c03068-kube-api-access-5h7n7\") pod \"66c8aa1b-83b7-456a-a8be-770922c03068\" (UID: \"66c8aa1b-83b7-456a-a8be-770922c03068\") " Oct 07 13:22:11 crc kubenswrapper[5024]: I1007 13:22:11.221767 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66c8aa1b-83b7-456a-a8be-770922c03068-kube-api-access-5h7n7" (OuterVolumeSpecName: "kube-api-access-5h7n7") pod "66c8aa1b-83b7-456a-a8be-770922c03068" (UID: "66c8aa1b-83b7-456a-a8be-770922c03068"). InnerVolumeSpecName "kube-api-access-5h7n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:22:11 crc kubenswrapper[5024]: I1007 13:22:11.319865 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h7n7\" (UniqueName: \"kubernetes.io/projected/66c8aa1b-83b7-456a-a8be-770922c03068-kube-api-access-5h7n7\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:11 crc kubenswrapper[5024]: I1007 13:22:11.335059 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6db8948-lncr7"] Oct 07 13:22:11 crc kubenswrapper[5024]: I1007 13:22:11.505274 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6748775596-w8q6s"] Oct 07 13:22:11 crc kubenswrapper[5024]: I1007 13:22:11.614199 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"63a9d721-ab0c-453a-9f04-5614717802ca","Type":"ContainerStarted","Data":"4d0a3e0e10e9f2a1f07ea423c3e436ff26992679f4af696fd0859514beb50482"} Oct 07 13:22:11 crc kubenswrapper[5024]: I1007 13:22:11.614302 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" 
podUID="63a9d721-ab0c-453a-9f04-5614717802ca" containerName="glance-log" containerID="cri-o://d91c11f2e47da54239805c6db43d78dd285759eda71943be266085b0d43ca093" gracePeriod=30 Oct 07 13:22:11 crc kubenswrapper[5024]: I1007 13:22:11.614387 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="63a9d721-ab0c-453a-9f04-5614717802ca" containerName="glance-httpd" containerID="cri-o://4d0a3e0e10e9f2a1f07ea423c3e436ff26992679f4af696fd0859514beb50482" gracePeriod=30 Oct 07 13:22:11 crc kubenswrapper[5024]: I1007 13:22:11.623971 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"893e2de7-0613-4750-a8a3-6630394129aa","Type":"ContainerStarted","Data":"5fe1ab5e05fe3684b2bd465cc79b852081320689d7664e0f27583f97013d6f1c"} Oct 07 13:22:11 crc kubenswrapper[5024]: I1007 13:22:11.652316 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.652251461 podStartE2EDuration="4.652251461s" podCreationTimestamp="2025-10-07 13:22:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 13:22:11.633273172 +0000 UTC m=+3269.709060010" watchObservedRunningTime="2025-10-07 13:22:11.652251461 +0000 UTC m=+3269.728038289" Oct 07 13:22:11 crc kubenswrapper[5024]: I1007 13:22:11.657756 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6748775596-w8q6s" event={"ID":"19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c","Type":"ContainerStarted","Data":"3585738c3c6a707fe0356eae85f75f5912a6067369745f940fc60d684fa45eaf"} Oct 07 13:22:11 crc kubenswrapper[5024]: I1007 13:22:11.664432 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-75p7j" 
event={"ID":"66c8aa1b-83b7-456a-a8be-770922c03068","Type":"ContainerDied","Data":"9b16ccdaba86a58cf09c12346689b1ed86abd5a4c086f13eeb0017ffd0538dc6"} Oct 07 13:22:11 crc kubenswrapper[5024]: I1007 13:22:11.664476 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b16ccdaba86a58cf09c12346689b1ed86abd5a4c086f13eeb0017ffd0538dc6" Oct 07 13:22:11 crc kubenswrapper[5024]: I1007 13:22:11.664531 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-75p7j" Oct 07 13:22:11 crc kubenswrapper[5024]: I1007 13:22:11.664593 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=4.422586396 podStartE2EDuration="5.664572228s" podCreationTimestamp="2025-10-07 13:22:06 +0000 UTC" firstStartedPulling="2025-10-07 13:22:08.546302078 +0000 UTC m=+3266.622088926" lastFinishedPulling="2025-10-07 13:22:09.78828792 +0000 UTC m=+3267.864074758" observedRunningTime="2025-10-07 13:22:11.658050599 +0000 UTC m=+3269.733837437" watchObservedRunningTime="2025-10-07 13:22:11.664572228 +0000 UTC m=+3269.740359066" Oct 07 13:22:11 crc kubenswrapper[5024]: I1007 13:22:11.673929 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6db8948-lncr7" event={"ID":"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782","Type":"ContainerStarted","Data":"969b250f1f79f3be31eb7e30884b1ed390cea55a1d4883b37b2cd455003bd115"} Oct 07 13:22:11 crc kubenswrapper[5024]: I1007 13:22:11.676728 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"098c1243-2802-4c9e-8627-57ccbed37148","Type":"ContainerStarted","Data":"15e9ad5d9add1b62cc3bd3265a09a611542ead625d7924001b35f202db211b2f"} Oct 07 13:22:11 crc kubenswrapper[5024]: I1007 13:22:11.676978 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="098c1243-2802-4c9e-8627-57ccbed37148" containerName="glance-log" containerID="cri-o://f372df94ecfed2cd6537ea9788a35f816b10a6254c20d38cc7eba1c2c2222905" gracePeriod=30 Oct 07 13:22:11 crc kubenswrapper[5024]: I1007 13:22:11.677452 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="098c1243-2802-4c9e-8627-57ccbed37148" containerName="glance-httpd" containerID="cri-o://15e9ad5d9add1b62cc3bd3265a09a611542ead625d7924001b35f202db211b2f" gracePeriod=30 Oct 07 13:22:11 crc kubenswrapper[5024]: I1007 13:22:11.705551 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.705527652 podStartE2EDuration="4.705527652s" podCreationTimestamp="2025-10-07 13:22:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 13:22:11.696695207 +0000 UTC m=+3269.772482045" watchObservedRunningTime="2025-10-07 13:22:11.705527652 +0000 UTC m=+3269.781314490" Oct 07 13:22:12 crc kubenswrapper[5024]: I1007 13:22:12.177467 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:12 crc kubenswrapper[5024]: I1007 13:22:12.233252 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Oct 07 13:22:12 crc kubenswrapper[5024]: I1007 13:22:12.695772 5024 generic.go:334] "Generic (PLEG): container finished" podID="098c1243-2802-4c9e-8627-57ccbed37148" containerID="15e9ad5d9add1b62cc3bd3265a09a611542ead625d7924001b35f202db211b2f" exitCode=0 Oct 07 13:22:12 crc kubenswrapper[5024]: I1007 13:22:12.696051 5024 generic.go:334] "Generic (PLEG): container finished" podID="098c1243-2802-4c9e-8627-57ccbed37148" containerID="f372df94ecfed2cd6537ea9788a35f816b10a6254c20d38cc7eba1c2c2222905" exitCode=143 Oct 07 13:22:12 crc 
kubenswrapper[5024]: I1007 13:22:12.695960 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"098c1243-2802-4c9e-8627-57ccbed37148","Type":"ContainerDied","Data":"15e9ad5d9add1b62cc3bd3265a09a611542ead625d7924001b35f202db211b2f"} Oct 07 13:22:12 crc kubenswrapper[5024]: I1007 13:22:12.696113 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"098c1243-2802-4c9e-8627-57ccbed37148","Type":"ContainerDied","Data":"f372df94ecfed2cd6537ea9788a35f816b10a6254c20d38cc7eba1c2c2222905"} Oct 07 13:22:12 crc kubenswrapper[5024]: I1007 13:22:12.698483 5024 generic.go:334] "Generic (PLEG): container finished" podID="63a9d721-ab0c-453a-9f04-5614717802ca" containerID="4d0a3e0e10e9f2a1f07ea423c3e436ff26992679f4af696fd0859514beb50482" exitCode=0 Oct 07 13:22:12 crc kubenswrapper[5024]: I1007 13:22:12.698511 5024 generic.go:334] "Generic (PLEG): container finished" podID="63a9d721-ab0c-453a-9f04-5614717802ca" containerID="d91c11f2e47da54239805c6db43d78dd285759eda71943be266085b0d43ca093" exitCode=143 Oct 07 13:22:12 crc kubenswrapper[5024]: I1007 13:22:12.698591 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"63a9d721-ab0c-453a-9f04-5614717802ca","Type":"ContainerDied","Data":"4d0a3e0e10e9f2a1f07ea423c3e436ff26992679f4af696fd0859514beb50482"} Oct 07 13:22:12 crc kubenswrapper[5024]: I1007 13:22:12.698654 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"63a9d721-ab0c-453a-9f04-5614717802ca","Type":"ContainerDied","Data":"d91c11f2e47da54239805c6db43d78dd285759eda71943be266085b0d43ca093"} Oct 07 13:22:17 crc kubenswrapper[5024]: I1007 13:22:17.479040 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Oct 07 13:22:17 crc kubenswrapper[5024]: I1007 13:22:17.503911 5024 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Oct 07 13:22:17 crc kubenswrapper[5024]: I1007 13:22:17.646364 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-75ff-account-create-5l2r7"] Oct 07 13:22:17 crc kubenswrapper[5024]: E1007 13:22:17.646857 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66c8aa1b-83b7-456a-a8be-770922c03068" containerName="mariadb-database-create" Oct 07 13:22:17 crc kubenswrapper[5024]: I1007 13:22:17.646875 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="66c8aa1b-83b7-456a-a8be-770922c03068" containerName="mariadb-database-create" Oct 07 13:22:17 crc kubenswrapper[5024]: I1007 13:22:17.647085 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="66c8aa1b-83b7-456a-a8be-770922c03068" containerName="mariadb-database-create" Oct 07 13:22:17 crc kubenswrapper[5024]: I1007 13:22:17.647881 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-75ff-account-create-5l2r7" Oct 07 13:22:17 crc kubenswrapper[5024]: I1007 13:22:17.651625 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Oct 07 13:22:17 crc kubenswrapper[5024]: I1007 13:22:17.656389 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-75ff-account-create-5l2r7"] Oct 07 13:22:17 crc kubenswrapper[5024]: I1007 13:22:17.781618 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nqr8\" (UniqueName: \"kubernetes.io/projected/27f6d105-987d-4f6b-b9cb-7613ec68c794-kube-api-access-7nqr8\") pod \"manila-75ff-account-create-5l2r7\" (UID: \"27f6d105-987d-4f6b-b9cb-7613ec68c794\") " pod="openstack/manila-75ff-account-create-5l2r7" Oct 07 13:22:17 crc kubenswrapper[5024]: I1007 13:22:17.883614 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nqr8\" 
(UniqueName: \"kubernetes.io/projected/27f6d105-987d-4f6b-b9cb-7613ec68c794-kube-api-access-7nqr8\") pod \"manila-75ff-account-create-5l2r7\" (UID: \"27f6d105-987d-4f6b-b9cb-7613ec68c794\") " pod="openstack/manila-75ff-account-create-5l2r7" Oct 07 13:22:17 crc kubenswrapper[5024]: I1007 13:22:17.904726 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nqr8\" (UniqueName: \"kubernetes.io/projected/27f6d105-987d-4f6b-b9cb-7613ec68c794-kube-api-access-7nqr8\") pod \"manila-75ff-account-create-5l2r7\" (UID: \"27f6d105-987d-4f6b-b9cb-7613ec68c794\") " pod="openstack/manila-75ff-account-create-5l2r7" Oct 07 13:22:17 crc kubenswrapper[5024]: I1007 13:22:17.996534 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-75ff-account-create-5l2r7" Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.124332 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.127386 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.189776 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/098c1243-2802-4c9e-8627-57ccbed37148-httpd-run\") pod \"098c1243-2802-4c9e-8627-57ccbed37148\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.189835 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a9d721-ab0c-453a-9f04-5614717802ca-scripts\") pod \"63a9d721-ab0c-453a-9f04-5614717802ca\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.189856 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/098c1243-2802-4c9e-8627-57ccbed37148-scripts\") pod \"098c1243-2802-4c9e-8627-57ccbed37148\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.189895 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/63a9d721-ab0c-453a-9f04-5614717802ca-ceph\") pod \"63a9d721-ab0c-453a-9f04-5614717802ca\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.189921 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098c1243-2802-4c9e-8627-57ccbed37148-combined-ca-bundle\") pod \"098c1243-2802-4c9e-8627-57ccbed37148\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.189938 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/63a9d721-ab0c-453a-9f04-5614717802ca-internal-tls-certs\") pod \"63a9d721-ab0c-453a-9f04-5614717802ca\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.190012 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/098c1243-2802-4c9e-8627-57ccbed37148-ceph\") pod \"098c1243-2802-4c9e-8627-57ccbed37148\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.190047 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098c1243-2802-4c9e-8627-57ccbed37148-config-data\") pod \"098c1243-2802-4c9e-8627-57ccbed37148\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.190103 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/098c1243-2802-4c9e-8627-57ccbed37148-public-tls-certs\") pod \"098c1243-2802-4c9e-8627-57ccbed37148\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.190147 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn7kl\" (UniqueName: \"kubernetes.io/projected/63a9d721-ab0c-453a-9f04-5614717802ca-kube-api-access-sn7kl\") pod \"63a9d721-ab0c-453a-9f04-5614717802ca\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.190199 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"63a9d721-ab0c-453a-9f04-5614717802ca\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.190229 5024 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-rvwft\" (UniqueName: \"kubernetes.io/projected/098c1243-2802-4c9e-8627-57ccbed37148-kube-api-access-rvwft\") pod \"098c1243-2802-4c9e-8627-57ccbed37148\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.190244 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"098c1243-2802-4c9e-8627-57ccbed37148\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.190270 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a9d721-ab0c-453a-9f04-5614717802ca-config-data\") pod \"63a9d721-ab0c-453a-9f04-5614717802ca\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.190297 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a9d721-ab0c-453a-9f04-5614717802ca-logs\") pod \"63a9d721-ab0c-453a-9f04-5614717802ca\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.190345 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/63a9d721-ab0c-453a-9f04-5614717802ca-httpd-run\") pod \"63a9d721-ab0c-453a-9f04-5614717802ca\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.190380 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/098c1243-2802-4c9e-8627-57ccbed37148-logs\") pod \"098c1243-2802-4c9e-8627-57ccbed37148\" (UID: \"098c1243-2802-4c9e-8627-57ccbed37148\") " Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 
13:22:18.190444 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a9d721-ab0c-453a-9f04-5614717802ca-combined-ca-bundle\") pod \"63a9d721-ab0c-453a-9f04-5614717802ca\" (UID: \"63a9d721-ab0c-453a-9f04-5614717802ca\") " Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.196587 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63a9d721-ab0c-453a-9f04-5614717802ca-logs" (OuterVolumeSpecName: "logs") pod "63a9d721-ab0c-453a-9f04-5614717802ca" (UID: "63a9d721-ab0c-453a-9f04-5614717802ca"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.196837 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63a9d721-ab0c-453a-9f04-5614717802ca-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "63a9d721-ab0c-453a-9f04-5614717802ca" (UID: "63a9d721-ab0c-453a-9f04-5614717802ca"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.197687 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/098c1243-2802-4c9e-8627-57ccbed37148-logs" (OuterVolumeSpecName: "logs") pod "098c1243-2802-4c9e-8627-57ccbed37148" (UID: "098c1243-2802-4c9e-8627-57ccbed37148"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.199538 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "098c1243-2802-4c9e-8627-57ccbed37148" (UID: "098c1243-2802-4c9e-8627-57ccbed37148"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.208644 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63a9d721-ab0c-453a-9f04-5614717802ca-kube-api-access-sn7kl" (OuterVolumeSpecName: "kube-api-access-sn7kl") pod "63a9d721-ab0c-453a-9f04-5614717802ca" (UID: "63a9d721-ab0c-453a-9f04-5614717802ca"). InnerVolumeSpecName "kube-api-access-sn7kl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.209429 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a9d721-ab0c-453a-9f04-5614717802ca-scripts" (OuterVolumeSpecName: "scripts") pod "63a9d721-ab0c-453a-9f04-5614717802ca" (UID: "63a9d721-ab0c-453a-9f04-5614717802ca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.212607 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/098c1243-2802-4c9e-8627-57ccbed37148-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "098c1243-2802-4c9e-8627-57ccbed37148" (UID: "098c1243-2802-4c9e-8627-57ccbed37148"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.213521 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63a9d721-ab0c-453a-9f04-5614717802ca-ceph" (OuterVolumeSpecName: "ceph") pod "63a9d721-ab0c-453a-9f04-5614717802ca" (UID: "63a9d721-ab0c-453a-9f04-5614717802ca"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.214809 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/098c1243-2802-4c9e-8627-57ccbed37148-scripts" (OuterVolumeSpecName: "scripts") pod "098c1243-2802-4c9e-8627-57ccbed37148" (UID: "098c1243-2802-4c9e-8627-57ccbed37148"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.220612 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/098c1243-2802-4c9e-8627-57ccbed37148-kube-api-access-rvwft" (OuterVolumeSpecName: "kube-api-access-rvwft") pod "098c1243-2802-4c9e-8627-57ccbed37148" (UID: "098c1243-2802-4c9e-8627-57ccbed37148"). InnerVolumeSpecName "kube-api-access-rvwft". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.220654 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/098c1243-2802-4c9e-8627-57ccbed37148-ceph" (OuterVolumeSpecName: "ceph") pod "098c1243-2802-4c9e-8627-57ccbed37148" (UID: "098c1243-2802-4c9e-8627-57ccbed37148"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:22:18 crc kubenswrapper[5024]: I1007 13:22:18.220738 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "63a9d721-ab0c-453a-9f04-5614717802ca" (UID: "63a9d721-ab0c-453a-9f04-5614717802ca"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.264795 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a9d721-ab0c-453a-9f04-5614717802ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63a9d721-ab0c-453a-9f04-5614717802ca" (UID: "63a9d721-ab0c-453a-9f04-5614717802ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.271570 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/098c1243-2802-4c9e-8627-57ccbed37148-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "098c1243-2802-4c9e-8627-57ccbed37148" (UID: "098c1243-2802-4c9e-8627-57ccbed37148"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.277766 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/098c1243-2802-4c9e-8627-57ccbed37148-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "098c1243-2802-4c9e-8627-57ccbed37148" (UID: "098c1243-2802-4c9e-8627-57ccbed37148"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.292668 5024 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.292700 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvwft\" (UniqueName: \"kubernetes.io/projected/098c1243-2802-4c9e-8627-57ccbed37148-kube-api-access-rvwft\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.292719 5024 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.292730 5024 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a9d721-ab0c-453a-9f04-5614717802ca-logs\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.292739 5024 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/63a9d721-ab0c-453a-9f04-5614717802ca-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.292748 5024 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/098c1243-2802-4c9e-8627-57ccbed37148-logs\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.292758 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a9d721-ab0c-453a-9f04-5614717802ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.292766 5024 reconciler_common.go:293] "Volume detached for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/098c1243-2802-4c9e-8627-57ccbed37148-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.292776 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a9d721-ab0c-453a-9f04-5614717802ca-scripts\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.292784 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/098c1243-2802-4c9e-8627-57ccbed37148-scripts\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.292794 5024 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/63a9d721-ab0c-453a-9f04-5614717802ca-ceph\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.292802 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098c1243-2802-4c9e-8627-57ccbed37148-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.292810 5024 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/098c1243-2802-4c9e-8627-57ccbed37148-ceph\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.292819 5024 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/098c1243-2802-4c9e-8627-57ccbed37148-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.292828 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn7kl\" (UniqueName: \"kubernetes.io/projected/63a9d721-ab0c-453a-9f04-5614717802ca-kube-api-access-sn7kl\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:19 crc kubenswrapper[5024]: 
I1007 13:22:18.297006 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/098c1243-2802-4c9e-8627-57ccbed37148-config-data" (OuterVolumeSpecName: "config-data") pod "098c1243-2802-4c9e-8627-57ccbed37148" (UID: "098c1243-2802-4c9e-8627-57ccbed37148"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.319130 5024 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.322004 5024 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.339177 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a9d721-ab0c-453a-9f04-5614717802ca-config-data" (OuterVolumeSpecName: "config-data") pod "63a9d721-ab0c-453a-9f04-5614717802ca" (UID: "63a9d721-ab0c-453a-9f04-5614717802ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.360348 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a9d721-ab0c-453a-9f04-5614717802ca-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "63a9d721-ab0c-453a-9f04-5614717802ca" (UID: "63a9d721-ab0c-453a-9f04-5614717802ca"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.395242 5024 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.395268 5024 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.395278 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a9d721-ab0c-453a-9f04-5614717802ca-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.395287 5024 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a9d721-ab0c-453a-9f04-5614717802ca-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.395297 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098c1243-2802-4c9e-8627-57ccbed37148-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.552241 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-75ff-account-create-5l2r7"] Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.767245 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-686c88cb4f-mfxbx" podUID="6803598d-855a-4bca-bf80-3427b6b516f3" containerName="horizon-log" containerID="cri-o://0caeb7f74530ccbe28fe457cda7a6618a4a9ef4b74dd87e92d39c33040d43f77" gracePeriod=30 Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.768226 5024 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/horizon-686c88cb4f-mfxbx" podUID="6803598d-855a-4bca-bf80-3427b6b516f3" containerName="horizon" containerID="cri-o://702abaede61160b54f412c70077321a58fd2e447f5b3ecd343f76774cca95d11" gracePeriod=30 Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.771082 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-686c88cb4f-mfxbx" event={"ID":"6803598d-855a-4bca-bf80-3427b6b516f3","Type":"ContainerStarted","Data":"702abaede61160b54f412c70077321a58fd2e447f5b3ecd343f76774cca95d11"} Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.771109 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-686c88cb4f-mfxbx" event={"ID":"6803598d-855a-4bca-bf80-3427b6b516f3","Type":"ContainerStarted","Data":"0caeb7f74530ccbe28fe457cda7a6618a4a9ef4b74dd87e92d39c33040d43f77"} Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.774205 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6db8948-lncr7" event={"ID":"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782","Type":"ContainerStarted","Data":"2c86969ca696b93b979832b0f0340a4cbad8a024ca5f53ab3b7a4d0f14c061c1"} Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.774254 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6db8948-lncr7" event={"ID":"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782","Type":"ContainerStarted","Data":"0ddab389f822d2b2f3b0f06d167e3a6f77e974ea8f14527e4507530de0b80b23"} Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.780859 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"098c1243-2802-4c9e-8627-57ccbed37148","Type":"ContainerDied","Data":"68b5498277e1d8519739481ee915d988a3609121a3bb02ad33e70d536d8e16d3"} Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.780924 5024 scope.go:117] "RemoveContainer" containerID="15e9ad5d9add1b62cc3bd3265a09a611542ead625d7924001b35f202db211b2f" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 
13:22:18.781159 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.803123 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-686c88cb4f-mfxbx" podStartSLOduration=2.688443172 podStartE2EDuration="11.803096771s" podCreationTimestamp="2025-10-07 13:22:07 +0000 UTC" firstStartedPulling="2025-10-07 13:22:09.012522773 +0000 UTC m=+3267.088309631" lastFinishedPulling="2025-10-07 13:22:18.127176382 +0000 UTC m=+3276.202963230" observedRunningTime="2025-10-07 13:22:18.785349708 +0000 UTC m=+3276.861136596" watchObservedRunningTime="2025-10-07 13:22:18.803096771 +0000 UTC m=+3276.878883619" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.820872 5024 scope.go:117] "RemoveContainer" containerID="f372df94ecfed2cd6537ea9788a35f816b10a6254c20d38cc7eba1c2c2222905" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.857745 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c888bc74f-qhpjd" event={"ID":"de1c7793-5541-45f4-ba71-a2094dcd051d","Type":"ContainerStarted","Data":"81923d13bdcea8f82d78a1af65d86dbaf0c2a59e592fc6134882437bbad17bb0"} Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.857781 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c888bc74f-qhpjd" event={"ID":"de1c7793-5541-45f4-ba71-a2094dcd051d","Type":"ContainerStarted","Data":"3f31f975f6f8e6ea362b17462ce5ec881b0b750045b35c230f48f5cc59b9a3ae"} Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.857831 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5c888bc74f-qhpjd" podUID="de1c7793-5541-45f4-ba71-a2094dcd051d" containerName="horizon" containerID="cri-o://81923d13bdcea8f82d78a1af65d86dbaf0c2a59e592fc6134882437bbad17bb0" gracePeriod=30 Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.857832 5024 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5c888bc74f-qhpjd" podUID="de1c7793-5541-45f4-ba71-a2094dcd051d" containerName="horizon-log" containerID="cri-o://3f31f975f6f8e6ea362b17462ce5ec881b0b750045b35c230f48f5cc59b9a3ae" gracePeriod=30 Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.863471 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.863459 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"63a9d721-ab0c-453a-9f04-5614717802ca","Type":"ContainerDied","Data":"2cb8252a3b55803fd348191a4d31ef8377fd8aa9ee51669c5fb99633c8149523"} Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.868599 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-75ff-account-create-5l2r7" event={"ID":"27f6d105-987d-4f6b-b9cb-7613ec68c794","Type":"ContainerStarted","Data":"cc349360be6e14b4a8dcad77aed9c517c2000836ad09188e31b75999acdae976"} Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.868640 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-75ff-account-create-5l2r7" event={"ID":"27f6d105-987d-4f6b-b9cb-7613ec68c794","Type":"ContainerStarted","Data":"bd5affd2d263c10a0f93c79586cd0cff052769f6388be03ddf5abf55fc5381a6"} Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.878061 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6748775596-w8q6s" event={"ID":"19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c","Type":"ContainerStarted","Data":"c53bce9e5eeb513ea15f03cb006654aec6005d3c461c94c4b441e60504085a7b"} Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.878131 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6748775596-w8q6s" 
event={"ID":"19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c","Type":"ContainerStarted","Data":"4a7275c84323538b5828d116afad6e22e95a3c3539d7626f05013b76e9aeffcb"} Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.883809 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6db8948-lncr7" podStartSLOduration=2.165777954 podStartE2EDuration="8.883778545s" podCreationTimestamp="2025-10-07 13:22:10 +0000 UTC" firstStartedPulling="2025-10-07 13:22:11.360451312 +0000 UTC m=+3269.436238140" lastFinishedPulling="2025-10-07 13:22:18.078451893 +0000 UTC m=+3276.154238731" observedRunningTime="2025-10-07 13:22:18.8732525 +0000 UTC m=+3276.949039358" watchObservedRunningTime="2025-10-07 13:22:18.883778545 +0000 UTC m=+3276.959565383" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.900255 5024 scope.go:117] "RemoveContainer" containerID="4d0a3e0e10e9f2a1f07ea423c3e436ff26992679f4af696fd0859514beb50482" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.916311 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.945685 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.962372 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 07 13:22:19 crc kubenswrapper[5024]: E1007 13:22:18.962828 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="098c1243-2802-4c9e-8627-57ccbed37148" containerName="glance-log" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.962840 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="098c1243-2802-4c9e-8627-57ccbed37148" containerName="glance-log" Oct 07 13:22:19 crc kubenswrapper[5024]: E1007 13:22:18.962862 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63a9d721-ab0c-453a-9f04-5614717802ca" 
containerName="glance-httpd" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.962868 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="63a9d721-ab0c-453a-9f04-5614717802ca" containerName="glance-httpd" Oct 07 13:22:19 crc kubenswrapper[5024]: E1007 13:22:18.962881 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="098c1243-2802-4c9e-8627-57ccbed37148" containerName="glance-httpd" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.962887 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="098c1243-2802-4c9e-8627-57ccbed37148" containerName="glance-httpd" Oct 07 13:22:19 crc kubenswrapper[5024]: E1007 13:22:18.962904 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63a9d721-ab0c-453a-9f04-5614717802ca" containerName="glance-log" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.962910 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="63a9d721-ab0c-453a-9f04-5614717802ca" containerName="glance-log" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.963106 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="63a9d721-ab0c-453a-9f04-5614717802ca" containerName="glance-log" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.963114 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="098c1243-2802-4c9e-8627-57ccbed37148" containerName="glance-httpd" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.963152 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="098c1243-2802-4c9e-8627-57ccbed37148" containerName="glance-log" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.963167 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="63a9d721-ab0c-453a-9f04-5614717802ca" containerName="glance-httpd" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.964244 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.968707 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.969003 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.969199 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-k8xkf" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.969352 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:18.982619 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-75ff-account-create-5l2r7" podStartSLOduration=1.982586232 podStartE2EDuration="1.982586232s" podCreationTimestamp="2025-10-07 13:22:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 13:22:18.916097759 +0000 UTC m=+3276.991884597" watchObservedRunningTime="2025-10-07 13:22:18.982586232 +0000 UTC m=+3277.058373070" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.016810 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4962d6d9-454e-474e-a4ca-f998de4bf476-ceph\") pod \"glance-default-internal-api-0\" (UID: \"4962d6d9-454e-474e-a4ca-f998de4bf476\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.016882 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"4962d6d9-454e-474e-a4ca-f998de4bf476\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.017095 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4962d6d9-454e-474e-a4ca-f998de4bf476-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4962d6d9-454e-474e-a4ca-f998de4bf476\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.017156 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4962d6d9-454e-474e-a4ca-f998de4bf476-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4962d6d9-454e-474e-a4ca-f998de4bf476\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.017249 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4962d6d9-454e-474e-a4ca-f998de4bf476-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4962d6d9-454e-474e-a4ca-f998de4bf476\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.017346 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4962d6d9-454e-474e-a4ca-f998de4bf476-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4962d6d9-454e-474e-a4ca-f998de4bf476\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.017419 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4962d6d9-454e-474e-a4ca-f998de4bf476-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4962d6d9-454e-474e-a4ca-f998de4bf476\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.017476 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfrx8\" (UniqueName: \"kubernetes.io/projected/4962d6d9-454e-474e-a4ca-f998de4bf476-kube-api-access-cfrx8\") pod \"glance-default-internal-api-0\" (UID: \"4962d6d9-454e-474e-a4ca-f998de4bf476\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.017568 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4962d6d9-454e-474e-a4ca-f998de4bf476-logs\") pod \"glance-default-internal-api-0\" (UID: \"4962d6d9-454e-474e-a4ca-f998de4bf476\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.049772 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.073639 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.088294 5024 scope.go:117] "RemoveContainer" containerID="d91c11f2e47da54239805c6db43d78dd285759eda71943be266085b0d43ca093" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.118363 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.119358 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4962d6d9-454e-474e-a4ca-f998de4bf476-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"4962d6d9-454e-474e-a4ca-f998de4bf476\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.119430 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4962d6d9-454e-474e-a4ca-f998de4bf476-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4962d6d9-454e-474e-a4ca-f998de4bf476\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.119481 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfrx8\" (UniqueName: \"kubernetes.io/projected/4962d6d9-454e-474e-a4ca-f998de4bf476-kube-api-access-cfrx8\") pod \"glance-default-internal-api-0\" (UID: \"4962d6d9-454e-474e-a4ca-f998de4bf476\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.119532 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4962d6d9-454e-474e-a4ca-f998de4bf476-logs\") pod \"glance-default-internal-api-0\" (UID: \"4962d6d9-454e-474e-a4ca-f998de4bf476\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.119590 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4962d6d9-454e-474e-a4ca-f998de4bf476-ceph\") pod \"glance-default-internal-api-0\" (UID: \"4962d6d9-454e-474e-a4ca-f998de4bf476\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.119615 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"4962d6d9-454e-474e-a4ca-f998de4bf476\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc 
kubenswrapper[5024]: I1007 13:22:19.119689 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4962d6d9-454e-474e-a4ca-f998de4bf476-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4962d6d9-454e-474e-a4ca-f998de4bf476\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.119719 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4962d6d9-454e-474e-a4ca-f998de4bf476-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4962d6d9-454e-474e-a4ca-f998de4bf476\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.119764 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4962d6d9-454e-474e-a4ca-f998de4bf476-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4962d6d9-454e-474e-a4ca-f998de4bf476\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.124494 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"4962d6d9-454e-474e-a4ca-f998de4bf476\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.126224 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4962d6d9-454e-474e-a4ca-f998de4bf476-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4962d6d9-454e-474e-a4ca-f998de4bf476\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.126958 5024 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4962d6d9-454e-474e-a4ca-f998de4bf476-logs\") pod \"glance-default-internal-api-0\" (UID: \"4962d6d9-454e-474e-a4ca-f998de4bf476\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.129219 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4962d6d9-454e-474e-a4ca-f998de4bf476-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4962d6d9-454e-474e-a4ca-f998de4bf476\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.137348 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4962d6d9-454e-474e-a4ca-f998de4bf476-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4962d6d9-454e-474e-a4ca-f998de4bf476\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.139214 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4962d6d9-454e-474e-a4ca-f998de4bf476-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4962d6d9-454e-474e-a4ca-f998de4bf476\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.144398 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4962d6d9-454e-474e-a4ca-f998de4bf476-ceph\") pod \"glance-default-internal-api-0\" (UID: \"4962d6d9-454e-474e-a4ca-f998de4bf476\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.157633 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4962d6d9-454e-474e-a4ca-f998de4bf476-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4962d6d9-454e-474e-a4ca-f998de4bf476\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.157725 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.160599 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.165520 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.166370 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfrx8\" (UniqueName: \"kubernetes.io/projected/4962d6d9-454e-474e-a4ca-f998de4bf476-kube-api-access-cfrx8\") pod \"glance-default-internal-api-0\" (UID: \"4962d6d9-454e-474e-a4ca-f998de4bf476\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.166536 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.167747 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5c888bc74f-qhpjd" podStartSLOduration=2.688918264 podStartE2EDuration="12.167731676s" podCreationTimestamp="2025-10-07 13:22:07 +0000 UTC" firstStartedPulling="2025-10-07 13:22:08.546427062 +0000 UTC m=+3266.622213900" lastFinishedPulling="2025-10-07 13:22:18.025240464 +0000 UTC m=+3276.101027312" observedRunningTime="2025-10-07 13:22:18.98112744 +0000 UTC m=+3277.056914278" watchObservedRunningTime="2025-10-07 13:22:19.167731676 +0000 UTC m=+3277.243518514" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.194816 5024 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.213619 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"4962d6d9-454e-474e-a4ca-f998de4bf476\") " pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.218925 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6748775596-w8q6s" podStartSLOduration=2.723103072 podStartE2EDuration="9.218904026s" podCreationTimestamp="2025-10-07 13:22:10 +0000 UTC" firstStartedPulling="2025-10-07 13:22:11.556003368 +0000 UTC m=+3269.631790206" lastFinishedPulling="2025-10-07 13:22:18.051804322 +0000 UTC m=+3276.127591160" observedRunningTime="2025-10-07 13:22:19.064074488 +0000 UTC m=+3277.139861316" watchObservedRunningTime="2025-10-07 13:22:19.218904026 +0000 UTC m=+3277.294690854" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.324280 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c1a01461-d836-42d8-99c4-6d21acf59856\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.324353 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6gj7\" (UniqueName: \"kubernetes.io/projected/c1a01461-d836-42d8-99c4-6d21acf59856-kube-api-access-h6gj7\") pod \"glance-default-external-api-0\" (UID: \"c1a01461-d836-42d8-99c4-6d21acf59856\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.324390 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a01461-d836-42d8-99c4-6d21acf59856-scripts\") pod \"glance-default-external-api-0\" (UID: \"c1a01461-d836-42d8-99c4-6d21acf59856\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.324420 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1a01461-d836-42d8-99c4-6d21acf59856-logs\") pod \"glance-default-external-api-0\" (UID: \"c1a01461-d836-42d8-99c4-6d21acf59856\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.324461 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a01461-d836-42d8-99c4-6d21acf59856-config-data\") pod \"glance-default-external-api-0\" (UID: \"c1a01461-d836-42d8-99c4-6d21acf59856\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.324493 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a01461-d836-42d8-99c4-6d21acf59856-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c1a01461-d836-42d8-99c4-6d21acf59856\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.324649 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c1a01461-d836-42d8-99c4-6d21acf59856-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c1a01461-d836-42d8-99c4-6d21acf59856\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.324723 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c1a01461-d836-42d8-99c4-6d21acf59856-ceph\") pod \"glance-default-external-api-0\" (UID: \"c1a01461-d836-42d8-99c4-6d21acf59856\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.324787 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a01461-d836-42d8-99c4-6d21acf59856-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c1a01461-d836-42d8-99c4-6d21acf59856\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.362529 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.426919 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c1a01461-d836-42d8-99c4-6d21acf59856-ceph\") pod \"glance-default-external-api-0\" (UID: \"c1a01461-d836-42d8-99c4-6d21acf59856\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.427246 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a01461-d836-42d8-99c4-6d21acf59856-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c1a01461-d836-42d8-99c4-6d21acf59856\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.427278 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c1a01461-d836-42d8-99c4-6d21acf59856\") " 
pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.427298 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6gj7\" (UniqueName: \"kubernetes.io/projected/c1a01461-d836-42d8-99c4-6d21acf59856-kube-api-access-h6gj7\") pod \"glance-default-external-api-0\" (UID: \"c1a01461-d836-42d8-99c4-6d21acf59856\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.427318 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a01461-d836-42d8-99c4-6d21acf59856-scripts\") pod \"glance-default-external-api-0\" (UID: \"c1a01461-d836-42d8-99c4-6d21acf59856\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.427335 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1a01461-d836-42d8-99c4-6d21acf59856-logs\") pod \"glance-default-external-api-0\" (UID: \"c1a01461-d836-42d8-99c4-6d21acf59856\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.427355 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a01461-d836-42d8-99c4-6d21acf59856-config-data\") pod \"glance-default-external-api-0\" (UID: \"c1a01461-d836-42d8-99c4-6d21acf59856\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.427375 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a01461-d836-42d8-99c4-6d21acf59856-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c1a01461-d836-42d8-99c4-6d21acf59856\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc 
kubenswrapper[5024]: I1007 13:22:19.427456 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c1a01461-d836-42d8-99c4-6d21acf59856-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c1a01461-d836-42d8-99c4-6d21acf59856\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.427708 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c1a01461-d836-42d8-99c4-6d21acf59856\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.427842 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c1a01461-d836-42d8-99c4-6d21acf59856-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c1a01461-d836-42d8-99c4-6d21acf59856\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.428059 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1a01461-d836-42d8-99c4-6d21acf59856-logs\") pod \"glance-default-external-api-0\" (UID: \"c1a01461-d836-42d8-99c4-6d21acf59856\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.432119 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a01461-d836-42d8-99c4-6d21acf59856-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c1a01461-d836-42d8-99c4-6d21acf59856\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.434341 5024 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a01461-d836-42d8-99c4-6d21acf59856-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c1a01461-d836-42d8-99c4-6d21acf59856\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.437473 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a01461-d836-42d8-99c4-6d21acf59856-scripts\") pod \"glance-default-external-api-0\" (UID: \"c1a01461-d836-42d8-99c4-6d21acf59856\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.439080 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c1a01461-d836-42d8-99c4-6d21acf59856-ceph\") pod \"glance-default-external-api-0\" (UID: \"c1a01461-d836-42d8-99c4-6d21acf59856\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.444342 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a01461-d836-42d8-99c4-6d21acf59856-config-data\") pod \"glance-default-external-api-0\" (UID: \"c1a01461-d836-42d8-99c4-6d21acf59856\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.450478 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6gj7\" (UniqueName: \"kubernetes.io/projected/c1a01461-d836-42d8-99c4-6d21acf59856-kube-api-access-h6gj7\") pod \"glance-default-external-api-0\" (UID: \"c1a01461-d836-42d8-99c4-6d21acf59856\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.480979 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c1a01461-d836-42d8-99c4-6d21acf59856\") " pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.568912 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.902595 5024 generic.go:334] "Generic (PLEG): container finished" podID="27f6d105-987d-4f6b-b9cb-7613ec68c794" containerID="cc349360be6e14b4a8dcad77aed9c517c2000836ad09188e31b75999acdae976" exitCode=0 Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.902857 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-75ff-account-create-5l2r7" event={"ID":"27f6d105-987d-4f6b-b9cb-7613ec68c794","Type":"ContainerDied","Data":"cc349360be6e14b4a8dcad77aed9c517c2000836ad09188e31b75999acdae976"} Oct 07 13:22:19 crc kubenswrapper[5024]: I1007 13:22:19.938565 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 07 13:22:19 crc kubenswrapper[5024]: W1007 13:22:19.949182 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4962d6d9_454e_474e_a4ca_f998de4bf476.slice/crio-02d53a9439d0f3b19a36146704bb9d5da25795f81ec2b4be7e0e41ae3045c54a WatchSource:0}: Error finding container 02d53a9439d0f3b19a36146704bb9d5da25795f81ec2b4be7e0e41ae3045c54a: Status 404 returned error can't find the container with id 02d53a9439d0f3b19a36146704bb9d5da25795f81ec2b4be7e0e41ae3045c54a Oct 07 13:22:20 crc kubenswrapper[5024]: I1007 13:22:20.185465 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 07 13:22:20 crc kubenswrapper[5024]: I1007 13:22:20.608269 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6db8948-lncr7" Oct 07 
13:22:20 crc kubenswrapper[5024]: I1007 13:22:20.608322 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6db8948-lncr7" Oct 07 13:22:20 crc kubenswrapper[5024]: I1007 13:22:20.774240 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="098c1243-2802-4c9e-8627-57ccbed37148" path="/var/lib/kubelet/pods/098c1243-2802-4c9e-8627-57ccbed37148/volumes" Oct 07 13:22:20 crc kubenswrapper[5024]: I1007 13:22:20.775634 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63a9d721-ab0c-453a-9f04-5614717802ca" path="/var/lib/kubelet/pods/63a9d721-ab0c-453a-9f04-5614717802ca/volumes" Oct 07 13:22:20 crc kubenswrapper[5024]: I1007 13:22:20.896000 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6748775596-w8q6s" Oct 07 13:22:20 crc kubenswrapper[5024]: I1007 13:22:20.896120 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6748775596-w8q6s" Oct 07 13:22:20 crc kubenswrapper[5024]: I1007 13:22:20.943941 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4962d6d9-454e-474e-a4ca-f998de4bf476","Type":"ContainerStarted","Data":"51c64daa4c259255836082f5cde4b980e890defb31ea4c590062d1f2e18d77ee"} Oct 07 13:22:20 crc kubenswrapper[5024]: I1007 13:22:20.944089 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4962d6d9-454e-474e-a4ca-f998de4bf476","Type":"ContainerStarted","Data":"02d53a9439d0f3b19a36146704bb9d5da25795f81ec2b4be7e0e41ae3045c54a"} Oct 07 13:22:20 crc kubenswrapper[5024]: I1007 13:22:20.949215 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c1a01461-d836-42d8-99c4-6d21acf59856","Type":"ContainerStarted","Data":"c6c9a36a00d29f5283d7cecee383a497040f4610d8590db59ecc38a49ff8f5a6"} Oct 07 13:22:20 crc 
kubenswrapper[5024]: I1007 13:22:20.949298 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c1a01461-d836-42d8-99c4-6d21acf59856","Type":"ContainerStarted","Data":"77d0d9b93f3671fa07cd5994867cc11bc45ade9499637271043bf53ff0a20c46"} Oct 07 13:22:21 crc kubenswrapper[5024]: I1007 13:22:21.452227 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-75ff-account-create-5l2r7" Oct 07 13:22:21 crc kubenswrapper[5024]: I1007 13:22:21.583856 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nqr8\" (UniqueName: \"kubernetes.io/projected/27f6d105-987d-4f6b-b9cb-7613ec68c794-kube-api-access-7nqr8\") pod \"27f6d105-987d-4f6b-b9cb-7613ec68c794\" (UID: \"27f6d105-987d-4f6b-b9cb-7613ec68c794\") " Oct 07 13:22:21 crc kubenswrapper[5024]: I1007 13:22:21.592543 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27f6d105-987d-4f6b-b9cb-7613ec68c794-kube-api-access-7nqr8" (OuterVolumeSpecName: "kube-api-access-7nqr8") pod "27f6d105-987d-4f6b-b9cb-7613ec68c794" (UID: "27f6d105-987d-4f6b-b9cb-7613ec68c794"). InnerVolumeSpecName "kube-api-access-7nqr8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:22:21 crc kubenswrapper[5024]: I1007 13:22:21.692704 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nqr8\" (UniqueName: \"kubernetes.io/projected/27f6d105-987d-4f6b-b9cb-7613ec68c794-kube-api-access-7nqr8\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:21 crc kubenswrapper[5024]: I1007 13:22:21.965320 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-75ff-account-create-5l2r7" event={"ID":"27f6d105-987d-4f6b-b9cb-7613ec68c794","Type":"ContainerDied","Data":"bd5affd2d263c10a0f93c79586cd0cff052769f6388be03ddf5abf55fc5381a6"} Oct 07 13:22:21 crc kubenswrapper[5024]: I1007 13:22:21.965387 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd5affd2d263c10a0f93c79586cd0cff052769f6388be03ddf5abf55fc5381a6" Oct 07 13:22:21 crc kubenswrapper[5024]: I1007 13:22:21.965397 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-75ff-account-create-5l2r7" Oct 07 13:22:21 crc kubenswrapper[5024]: I1007 13:22:21.971183 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4962d6d9-454e-474e-a4ca-f998de4bf476","Type":"ContainerStarted","Data":"b282a20f1b37c3499b961a3628395abab6b19733d98865e9fd6beec1197875d7"} Oct 07 13:22:21 crc kubenswrapper[5024]: I1007 13:22:21.986596 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c1a01461-d836-42d8-99c4-6d21acf59856","Type":"ContainerStarted","Data":"648a015a43b00410633448c226fb6062c4b5e058c2d91b4b349ab8b9d26c8bf1"} Oct 07 13:22:21 crc kubenswrapper[5024]: I1007 13:22:21.995867 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.995838332 podStartE2EDuration="3.995838332s" podCreationTimestamp="2025-10-07 13:22:18 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 13:22:21.993957308 +0000 UTC m=+3280.069744156" watchObservedRunningTime="2025-10-07 13:22:21.995838332 +0000 UTC m=+3280.071625170" Oct 07 13:22:22 crc kubenswrapper[5024]: I1007 13:22:22.028398 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.028379344 podStartE2EDuration="4.028379344s" podCreationTimestamp="2025-10-07 13:22:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 13:22:22.017322614 +0000 UTC m=+3280.093109462" watchObservedRunningTime="2025-10-07 13:22:22.028379344 +0000 UTC m=+3280.104166182" Oct 07 13:22:23 crc kubenswrapper[5024]: I1007 13:22:23.183686 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-6fppq"] Oct 07 13:22:23 crc kubenswrapper[5024]: E1007 13:22:23.184489 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27f6d105-987d-4f6b-b9cb-7613ec68c794" containerName="mariadb-account-create" Oct 07 13:22:23 crc kubenswrapper[5024]: I1007 13:22:23.184519 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="27f6d105-987d-4f6b-b9cb-7613ec68c794" containerName="mariadb-account-create" Oct 07 13:22:23 crc kubenswrapper[5024]: I1007 13:22:23.184763 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="27f6d105-987d-4f6b-b9cb-7613ec68c794" containerName="mariadb-account-create" Oct 07 13:22:23 crc kubenswrapper[5024]: I1007 13:22:23.185564 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-6fppq" Oct 07 13:22:23 crc kubenswrapper[5024]: I1007 13:22:23.189552 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Oct 07 13:22:23 crc kubenswrapper[5024]: I1007 13:22:23.190023 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-269wc" Oct 07 13:22:23 crc kubenswrapper[5024]: I1007 13:22:23.202258 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-6fppq"] Oct 07 13:22:23 crc kubenswrapper[5024]: I1007 13:22:23.240561 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdjtk\" (UniqueName: \"kubernetes.io/projected/aad68430-9c4b-4a22-b2a3-417a796af04b-kube-api-access-qdjtk\") pod \"manila-db-sync-6fppq\" (UID: \"aad68430-9c4b-4a22-b2a3-417a796af04b\") " pod="openstack/manila-db-sync-6fppq" Oct 07 13:22:23 crc kubenswrapper[5024]: I1007 13:22:23.240625 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad68430-9c4b-4a22-b2a3-417a796af04b-combined-ca-bundle\") pod \"manila-db-sync-6fppq\" (UID: \"aad68430-9c4b-4a22-b2a3-417a796af04b\") " pod="openstack/manila-db-sync-6fppq" Oct 07 13:22:23 crc kubenswrapper[5024]: I1007 13:22:23.240687 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/aad68430-9c4b-4a22-b2a3-417a796af04b-job-config-data\") pod \"manila-db-sync-6fppq\" (UID: \"aad68430-9c4b-4a22-b2a3-417a796af04b\") " pod="openstack/manila-db-sync-6fppq" Oct 07 13:22:23 crc kubenswrapper[5024]: I1007 13:22:23.240719 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/aad68430-9c4b-4a22-b2a3-417a796af04b-config-data\") pod \"manila-db-sync-6fppq\" (UID: \"aad68430-9c4b-4a22-b2a3-417a796af04b\") " pod="openstack/manila-db-sync-6fppq" Oct 07 13:22:23 crc kubenswrapper[5024]: I1007 13:22:23.343413 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/aad68430-9c4b-4a22-b2a3-417a796af04b-job-config-data\") pod \"manila-db-sync-6fppq\" (UID: \"aad68430-9c4b-4a22-b2a3-417a796af04b\") " pod="openstack/manila-db-sync-6fppq" Oct 07 13:22:23 crc kubenswrapper[5024]: I1007 13:22:23.343489 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aad68430-9c4b-4a22-b2a3-417a796af04b-config-data\") pod \"manila-db-sync-6fppq\" (UID: \"aad68430-9c4b-4a22-b2a3-417a796af04b\") " pod="openstack/manila-db-sync-6fppq" Oct 07 13:22:23 crc kubenswrapper[5024]: I1007 13:22:23.343663 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdjtk\" (UniqueName: \"kubernetes.io/projected/aad68430-9c4b-4a22-b2a3-417a796af04b-kube-api-access-qdjtk\") pod \"manila-db-sync-6fppq\" (UID: \"aad68430-9c4b-4a22-b2a3-417a796af04b\") " pod="openstack/manila-db-sync-6fppq" Oct 07 13:22:23 crc kubenswrapper[5024]: I1007 13:22:23.344044 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad68430-9c4b-4a22-b2a3-417a796af04b-combined-ca-bundle\") pod \"manila-db-sync-6fppq\" (UID: \"aad68430-9c4b-4a22-b2a3-417a796af04b\") " pod="openstack/manila-db-sync-6fppq" Oct 07 13:22:23 crc kubenswrapper[5024]: I1007 13:22:23.351210 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/aad68430-9c4b-4a22-b2a3-417a796af04b-job-config-data\") pod \"manila-db-sync-6fppq\" (UID: 
\"aad68430-9c4b-4a22-b2a3-417a796af04b\") " pod="openstack/manila-db-sync-6fppq" Oct 07 13:22:23 crc kubenswrapper[5024]: I1007 13:22:23.352236 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad68430-9c4b-4a22-b2a3-417a796af04b-combined-ca-bundle\") pod \"manila-db-sync-6fppq\" (UID: \"aad68430-9c4b-4a22-b2a3-417a796af04b\") " pod="openstack/manila-db-sync-6fppq" Oct 07 13:22:23 crc kubenswrapper[5024]: I1007 13:22:23.353184 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aad68430-9c4b-4a22-b2a3-417a796af04b-config-data\") pod \"manila-db-sync-6fppq\" (UID: \"aad68430-9c4b-4a22-b2a3-417a796af04b\") " pod="openstack/manila-db-sync-6fppq" Oct 07 13:22:23 crc kubenswrapper[5024]: I1007 13:22:23.367843 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdjtk\" (UniqueName: \"kubernetes.io/projected/aad68430-9c4b-4a22-b2a3-417a796af04b-kube-api-access-qdjtk\") pod \"manila-db-sync-6fppq\" (UID: \"aad68430-9c4b-4a22-b2a3-417a796af04b\") " pod="openstack/manila-db-sync-6fppq" Oct 07 13:22:23 crc kubenswrapper[5024]: I1007 13:22:23.511755 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-6fppq" Oct 07 13:22:24 crc kubenswrapper[5024]: I1007 13:22:24.113738 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-6fppq"] Oct 07 13:22:24 crc kubenswrapper[5024]: I1007 13:22:24.752825 5024 scope.go:117] "RemoveContainer" containerID="7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd" Oct 07 13:22:24 crc kubenswrapper[5024]: E1007 13:22:24.753778 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:22:25 crc kubenswrapper[5024]: I1007 13:22:25.021223 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-6fppq" event={"ID":"aad68430-9c4b-4a22-b2a3-417a796af04b","Type":"ContainerStarted","Data":"abda36d4ca089e543b35b86e6aa520f3a0b24b670cab0a471fad76db89282b23"} Oct 07 13:22:28 crc kubenswrapper[5024]: I1007 13:22:28.073959 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5c888bc74f-qhpjd" Oct 07 13:22:28 crc kubenswrapper[5024]: I1007 13:22:28.222564 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-686c88cb4f-mfxbx" Oct 07 13:22:29 crc kubenswrapper[5024]: I1007 13:22:29.363675 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 07 13:22:29 crc kubenswrapper[5024]: I1007 13:22:29.363733 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 07 13:22:29 crc kubenswrapper[5024]: I1007 13:22:29.405050 5024 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 07 13:22:29 crc kubenswrapper[5024]: I1007 13:22:29.420245 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 07 13:22:29 crc kubenswrapper[5024]: I1007 13:22:29.570919 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 07 13:22:29 crc kubenswrapper[5024]: I1007 13:22:29.570976 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 07 13:22:29 crc kubenswrapper[5024]: I1007 13:22:29.618301 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 07 13:22:29 crc kubenswrapper[5024]: I1007 13:22:29.619934 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 07 13:22:30 crc kubenswrapper[5024]: I1007 13:22:30.071314 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 07 13:22:30 crc kubenswrapper[5024]: I1007 13:22:30.071360 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 07 13:22:30 crc kubenswrapper[5024]: I1007 13:22:30.071373 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 07 13:22:30 crc kubenswrapper[5024]: I1007 13:22:30.071386 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 07 13:22:30 crc kubenswrapper[5024]: I1007 13:22:30.610507 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6db8948-lncr7" podUID="2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused" Oct 07 13:22:30 crc kubenswrapper[5024]: I1007 13:22:30.897940 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6748775596-w8q6s" podUID="19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.245:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.245:8443: connect: connection refused" Oct 07 13:22:32 crc kubenswrapper[5024]: I1007 13:22:32.094569 5024 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 07 13:22:32 crc kubenswrapper[5024]: I1007 13:22:32.095094 5024 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 07 13:22:32 crc kubenswrapper[5024]: I1007 13:22:32.095093 5024 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 07 13:22:32 crc kubenswrapper[5024]: I1007 13:22:32.095157 5024 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 07 13:22:32 crc kubenswrapper[5024]: I1007 13:22:32.094970 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-6fppq" event={"ID":"aad68430-9c4b-4a22-b2a3-417a796af04b","Type":"ContainerStarted","Data":"49091536d96d3757b29ec84b40c9bad72868024751b83cbe1b55f663a15d3599"} Oct 07 13:22:32 crc kubenswrapper[5024]: I1007 13:22:32.123432 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-6fppq" podStartSLOduration=2.110860767 podStartE2EDuration="9.123413228s" podCreationTimestamp="2025-10-07 13:22:23 +0000 UTC" firstStartedPulling="2025-10-07 13:22:24.118250448 +0000 UTC m=+3282.194037286" lastFinishedPulling="2025-10-07 13:22:31.130802909 +0000 UTC m=+3289.206589747" observedRunningTime="2025-10-07 13:22:32.116598071 +0000 UTC m=+3290.192384909" watchObservedRunningTime="2025-10-07 13:22:32.123413228 +0000 UTC 
m=+3290.199200066" Oct 07 13:22:34 crc kubenswrapper[5024]: I1007 13:22:34.658939 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 07 13:22:34 crc kubenswrapper[5024]: I1007 13:22:34.659476 5024 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 07 13:22:34 crc kubenswrapper[5024]: I1007 13:22:34.661494 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 07 13:22:34 crc kubenswrapper[5024]: I1007 13:22:34.683752 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 07 13:22:34 crc kubenswrapper[5024]: I1007 13:22:34.683888 5024 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 07 13:22:34 crc kubenswrapper[5024]: I1007 13:22:34.689185 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 07 13:22:37 crc kubenswrapper[5024]: I1007 13:22:37.554680 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7kfnj"] Oct 07 13:22:37 crc kubenswrapper[5024]: I1007 13:22:37.561043 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7kfnj" Oct 07 13:22:37 crc kubenswrapper[5024]: I1007 13:22:37.566723 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7kfnj"] Oct 07 13:22:37 crc kubenswrapper[5024]: I1007 13:22:37.634259 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c854e07-cc2f-4944-b520-38e55f3ae348-utilities\") pod \"redhat-operators-7kfnj\" (UID: \"1c854e07-cc2f-4944-b520-38e55f3ae348\") " pod="openshift-marketplace/redhat-operators-7kfnj" Oct 07 13:22:37 crc kubenswrapper[5024]: I1007 13:22:37.634321 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c854e07-cc2f-4944-b520-38e55f3ae348-catalog-content\") pod \"redhat-operators-7kfnj\" (UID: \"1c854e07-cc2f-4944-b520-38e55f3ae348\") " pod="openshift-marketplace/redhat-operators-7kfnj" Oct 07 13:22:37 crc kubenswrapper[5024]: I1007 13:22:37.634417 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g56qf\" (UniqueName: \"kubernetes.io/projected/1c854e07-cc2f-4944-b520-38e55f3ae348-kube-api-access-g56qf\") pod \"redhat-operators-7kfnj\" (UID: \"1c854e07-cc2f-4944-b520-38e55f3ae348\") " pod="openshift-marketplace/redhat-operators-7kfnj" Oct 07 13:22:37 crc kubenswrapper[5024]: I1007 13:22:37.736577 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c854e07-cc2f-4944-b520-38e55f3ae348-utilities\") pod \"redhat-operators-7kfnj\" (UID: \"1c854e07-cc2f-4944-b520-38e55f3ae348\") " pod="openshift-marketplace/redhat-operators-7kfnj" Oct 07 13:22:37 crc kubenswrapper[5024]: I1007 13:22:37.736651 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c854e07-cc2f-4944-b520-38e55f3ae348-catalog-content\") pod \"redhat-operators-7kfnj\" (UID: \"1c854e07-cc2f-4944-b520-38e55f3ae348\") " pod="openshift-marketplace/redhat-operators-7kfnj" Oct 07 13:22:37 crc kubenswrapper[5024]: I1007 13:22:37.736758 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g56qf\" (UniqueName: \"kubernetes.io/projected/1c854e07-cc2f-4944-b520-38e55f3ae348-kube-api-access-g56qf\") pod \"redhat-operators-7kfnj\" (UID: \"1c854e07-cc2f-4944-b520-38e55f3ae348\") " pod="openshift-marketplace/redhat-operators-7kfnj" Oct 07 13:22:37 crc kubenswrapper[5024]: I1007 13:22:37.737766 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c854e07-cc2f-4944-b520-38e55f3ae348-utilities\") pod \"redhat-operators-7kfnj\" (UID: \"1c854e07-cc2f-4944-b520-38e55f3ae348\") " pod="openshift-marketplace/redhat-operators-7kfnj" Oct 07 13:22:37 crc kubenswrapper[5024]: I1007 13:22:37.738036 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c854e07-cc2f-4944-b520-38e55f3ae348-catalog-content\") pod \"redhat-operators-7kfnj\" (UID: \"1c854e07-cc2f-4944-b520-38e55f3ae348\") " pod="openshift-marketplace/redhat-operators-7kfnj" Oct 07 13:22:37 crc kubenswrapper[5024]: I1007 13:22:37.772402 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g56qf\" (UniqueName: \"kubernetes.io/projected/1c854e07-cc2f-4944-b520-38e55f3ae348-kube-api-access-g56qf\") pod \"redhat-operators-7kfnj\" (UID: \"1c854e07-cc2f-4944-b520-38e55f3ae348\") " pod="openshift-marketplace/redhat-operators-7kfnj" Oct 07 13:22:37 crc kubenswrapper[5024]: I1007 13:22:37.946057 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7kfnj" Oct 07 13:22:38 crc kubenswrapper[5024]: I1007 13:22:38.470827 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7kfnj"] Oct 07 13:22:39 crc kubenswrapper[5024]: I1007 13:22:39.174714 5024 generic.go:334] "Generic (PLEG): container finished" podID="1c854e07-cc2f-4944-b520-38e55f3ae348" containerID="8e05b5fff30621b65569054299e5fac8a88c18835fdb5441c3503eb5b55a8477" exitCode=0 Oct 07 13:22:39 crc kubenswrapper[5024]: I1007 13:22:39.175261 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kfnj" event={"ID":"1c854e07-cc2f-4944-b520-38e55f3ae348","Type":"ContainerDied","Data":"8e05b5fff30621b65569054299e5fac8a88c18835fdb5441c3503eb5b55a8477"} Oct 07 13:22:39 crc kubenswrapper[5024]: I1007 13:22:39.175305 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kfnj" event={"ID":"1c854e07-cc2f-4944-b520-38e55f3ae348","Type":"ContainerStarted","Data":"f611b3cab0a21661eddcb9dfb8aeecba18464906658af14e8e5b07634f7038b3"} Oct 07 13:22:39 crc kubenswrapper[5024]: I1007 13:22:39.751911 5024 scope.go:117] "RemoveContainer" containerID="7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd" Oct 07 13:22:39 crc kubenswrapper[5024]: E1007 13:22:39.752242 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:22:40 crc kubenswrapper[5024]: I1007 13:22:40.609378 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6db8948-lncr7" 
podUID="2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused" Oct 07 13:22:40 crc kubenswrapper[5024]: I1007 13:22:40.896422 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6748775596-w8q6s" podUID="19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.245:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.245:8443: connect: connection refused" Oct 07 13:22:42 crc kubenswrapper[5024]: I1007 13:22:42.207007 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kfnj" event={"ID":"1c854e07-cc2f-4944-b520-38e55f3ae348","Type":"ContainerStarted","Data":"2e4286714c1ca8b284240c76ec8ed3ade152623a7d0204e1685244aec5fb0d60"} Oct 07 13:22:49 crc kubenswrapper[5024]: I1007 13:22:49.301780 5024 generic.go:334] "Generic (PLEG): container finished" podID="6803598d-855a-4bca-bf80-3427b6b516f3" containerID="0caeb7f74530ccbe28fe457cda7a6618a4a9ef4b74dd87e92d39c33040d43f77" exitCode=137 Oct 07 13:22:49 crc kubenswrapper[5024]: I1007 13:22:49.302529 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-686c88cb4f-mfxbx" event={"ID":"6803598d-855a-4bca-bf80-3427b6b516f3","Type":"ContainerDied","Data":"0caeb7f74530ccbe28fe457cda7a6618a4a9ef4b74dd87e92d39c33040d43f77"} Oct 07 13:22:50 crc kubenswrapper[5024]: I1007 13:22:50.315786 5024 generic.go:334] "Generic (PLEG): container finished" podID="6803598d-855a-4bca-bf80-3427b6b516f3" containerID="702abaede61160b54f412c70077321a58fd2e447f5b3ecd343f76774cca95d11" exitCode=137 Oct 07 13:22:50 crc kubenswrapper[5024]: I1007 13:22:50.315860 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-686c88cb4f-mfxbx" 
event={"ID":"6803598d-855a-4bca-bf80-3427b6b516f3","Type":"ContainerDied","Data":"702abaede61160b54f412c70077321a58fd2e447f5b3ecd343f76774cca95d11"} Oct 07 13:22:50 crc kubenswrapper[5024]: I1007 13:22:50.318828 5024 generic.go:334] "Generic (PLEG): container finished" podID="de1c7793-5541-45f4-ba71-a2094dcd051d" containerID="81923d13bdcea8f82d78a1af65d86dbaf0c2a59e592fc6134882437bbad17bb0" exitCode=137 Oct 07 13:22:50 crc kubenswrapper[5024]: I1007 13:22:50.318879 5024 generic.go:334] "Generic (PLEG): container finished" podID="de1c7793-5541-45f4-ba71-a2094dcd051d" containerID="3f31f975f6f8e6ea362b17462ce5ec881b0b750045b35c230f48f5cc59b9a3ae" exitCode=137 Oct 07 13:22:50 crc kubenswrapper[5024]: I1007 13:22:50.318900 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c888bc74f-qhpjd" event={"ID":"de1c7793-5541-45f4-ba71-a2094dcd051d","Type":"ContainerDied","Data":"81923d13bdcea8f82d78a1af65d86dbaf0c2a59e592fc6134882437bbad17bb0"} Oct 07 13:22:50 crc kubenswrapper[5024]: I1007 13:22:50.318928 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c888bc74f-qhpjd" event={"ID":"de1c7793-5541-45f4-ba71-a2094dcd051d","Type":"ContainerDied","Data":"3f31f975f6f8e6ea362b17462ce5ec881b0b750045b35c230f48f5cc59b9a3ae"} Oct 07 13:22:50 crc kubenswrapper[5024]: I1007 13:22:50.752097 5024 scope.go:117] "RemoveContainer" containerID="7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd" Oct 07 13:22:50 crc kubenswrapper[5024]: E1007 13:22:50.752591 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 
13:22:52.062645 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-686c88cb4f-mfxbx" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.065393 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c888bc74f-qhpjd" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.100542 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twfd4\" (UniqueName: \"kubernetes.io/projected/de1c7793-5541-45f4-ba71-a2094dcd051d-kube-api-access-twfd4\") pod \"de1c7793-5541-45f4-ba71-a2094dcd051d\" (UID: \"de1c7793-5541-45f4-ba71-a2094dcd051d\") " Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.100618 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6803598d-855a-4bca-bf80-3427b6b516f3-logs\") pod \"6803598d-855a-4bca-bf80-3427b6b516f3\" (UID: \"6803598d-855a-4bca-bf80-3427b6b516f3\") " Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.100737 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6803598d-855a-4bca-bf80-3427b6b516f3-scripts\") pod \"6803598d-855a-4bca-bf80-3427b6b516f3\" (UID: \"6803598d-855a-4bca-bf80-3427b6b516f3\") " Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.100810 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vftdp\" (UniqueName: \"kubernetes.io/projected/6803598d-855a-4bca-bf80-3427b6b516f3-kube-api-access-vftdp\") pod \"6803598d-855a-4bca-bf80-3427b6b516f3\" (UID: \"6803598d-855a-4bca-bf80-3427b6b516f3\") " Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.100894 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/de1c7793-5541-45f4-ba71-a2094dcd051d-horizon-secret-key\") 
pod \"de1c7793-5541-45f4-ba71-a2094dcd051d\" (UID: \"de1c7793-5541-45f4-ba71-a2094dcd051d\") " Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.100935 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6803598d-855a-4bca-bf80-3427b6b516f3-horizon-secret-key\") pod \"6803598d-855a-4bca-bf80-3427b6b516f3\" (UID: \"6803598d-855a-4bca-bf80-3427b6b516f3\") " Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.100993 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1c7793-5541-45f4-ba71-a2094dcd051d-logs\") pod \"de1c7793-5541-45f4-ba71-a2094dcd051d\" (UID: \"de1c7793-5541-45f4-ba71-a2094dcd051d\") " Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.101122 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6803598d-855a-4bca-bf80-3427b6b516f3-config-data\") pod \"6803598d-855a-4bca-bf80-3427b6b516f3\" (UID: \"6803598d-855a-4bca-bf80-3427b6b516f3\") " Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.101239 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de1c7793-5541-45f4-ba71-a2094dcd051d-scripts\") pod \"de1c7793-5541-45f4-ba71-a2094dcd051d\" (UID: \"de1c7793-5541-45f4-ba71-a2094dcd051d\") " Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.101272 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de1c7793-5541-45f4-ba71-a2094dcd051d-config-data\") pod \"de1c7793-5541-45f4-ba71-a2094dcd051d\" (UID: \"de1c7793-5541-45f4-ba71-a2094dcd051d\") " Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.104325 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/de1c7793-5541-45f4-ba71-a2094dcd051d-logs" (OuterVolumeSpecName: "logs") pod "de1c7793-5541-45f4-ba71-a2094dcd051d" (UID: "de1c7793-5541-45f4-ba71-a2094dcd051d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.122075 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de1c7793-5541-45f4-ba71-a2094dcd051d-kube-api-access-twfd4" (OuterVolumeSpecName: "kube-api-access-twfd4") pod "de1c7793-5541-45f4-ba71-a2094dcd051d" (UID: "de1c7793-5541-45f4-ba71-a2094dcd051d"). InnerVolumeSpecName "kube-api-access-twfd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.122119 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6803598d-855a-4bca-bf80-3427b6b516f3-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6803598d-855a-4bca-bf80-3427b6b516f3" (UID: "6803598d-855a-4bca-bf80-3427b6b516f3"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.138297 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6803598d-855a-4bca-bf80-3427b6b516f3-logs" (OuterVolumeSpecName: "logs") pod "6803598d-855a-4bca-bf80-3427b6b516f3" (UID: "6803598d-855a-4bca-bf80-3427b6b516f3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.139209 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6803598d-855a-4bca-bf80-3427b6b516f3-kube-api-access-vftdp" (OuterVolumeSpecName: "kube-api-access-vftdp") pod "6803598d-855a-4bca-bf80-3427b6b516f3" (UID: "6803598d-855a-4bca-bf80-3427b6b516f3"). 
InnerVolumeSpecName "kube-api-access-vftdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.139758 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1c7793-5541-45f4-ba71-a2094dcd051d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "de1c7793-5541-45f4-ba71-a2094dcd051d" (UID: "de1c7793-5541-45f4-ba71-a2094dcd051d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.157884 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6803598d-855a-4bca-bf80-3427b6b516f3-scripts" (OuterVolumeSpecName: "scripts") pod "6803598d-855a-4bca-bf80-3427b6b516f3" (UID: "6803598d-855a-4bca-bf80-3427b6b516f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.159954 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de1c7793-5541-45f4-ba71-a2094dcd051d-scripts" (OuterVolumeSpecName: "scripts") pod "de1c7793-5541-45f4-ba71-a2094dcd051d" (UID: "de1c7793-5541-45f4-ba71-a2094dcd051d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.165234 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6803598d-855a-4bca-bf80-3427b6b516f3-config-data" (OuterVolumeSpecName: "config-data") pod "6803598d-855a-4bca-bf80-3427b6b516f3" (UID: "6803598d-855a-4bca-bf80-3427b6b516f3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.168384 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de1c7793-5541-45f4-ba71-a2094dcd051d-config-data" (OuterVolumeSpecName: "config-data") pod "de1c7793-5541-45f4-ba71-a2094dcd051d" (UID: "de1c7793-5541-45f4-ba71-a2094dcd051d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.204641 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vftdp\" (UniqueName: \"kubernetes.io/projected/6803598d-855a-4bca-bf80-3427b6b516f3-kube-api-access-vftdp\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.204685 5024 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/de1c7793-5541-45f4-ba71-a2094dcd051d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.204695 5024 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6803598d-855a-4bca-bf80-3427b6b516f3-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.204704 5024 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1c7793-5541-45f4-ba71-a2094dcd051d-logs\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.204715 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6803598d-855a-4bca-bf80-3427b6b516f3-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.204724 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/de1c7793-5541-45f4-ba71-a2094dcd051d-scripts\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.204732 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de1c7793-5541-45f4-ba71-a2094dcd051d-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.204743 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twfd4\" (UniqueName: \"kubernetes.io/projected/de1c7793-5541-45f4-ba71-a2094dcd051d-kube-api-access-twfd4\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.204751 5024 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6803598d-855a-4bca-bf80-3427b6b516f3-logs\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.204758 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6803598d-855a-4bca-bf80-3427b6b516f3-scripts\") on node \"crc\" DevicePath \"\"" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.350395 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-686c88cb4f-mfxbx" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.350380 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-686c88cb4f-mfxbx" event={"ID":"6803598d-855a-4bca-bf80-3427b6b516f3","Type":"ContainerDied","Data":"3bcfdb7842fee653ff28a17ea7dc21269036aa976430c39a8b288fc052cc792e"} Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.350555 5024 scope.go:117] "RemoveContainer" containerID="702abaede61160b54f412c70077321a58fd2e447f5b3ecd343f76774cca95d11" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.354915 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c888bc74f-qhpjd" event={"ID":"de1c7793-5541-45f4-ba71-a2094dcd051d","Type":"ContainerDied","Data":"d5dbfb080dce90d6b735930d37f09d859f96ad9a5823c2b6341295974670ae83"} Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.354936 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c888bc74f-qhpjd" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.393859 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-686c88cb4f-mfxbx"] Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.402710 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-686c88cb4f-mfxbx"] Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.417129 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c888bc74f-qhpjd"] Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.426692 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5c888bc74f-qhpjd"] Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.522732 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6db8948-lncr7" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.611886 5024 scope.go:117] "RemoveContainer" 
containerID="0caeb7f74530ccbe28fe457cda7a6618a4a9ef4b74dd87e92d39c33040d43f77" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.663070 5024 scope.go:117] "RemoveContainer" containerID="81923d13bdcea8f82d78a1af65d86dbaf0c2a59e592fc6134882437bbad17bb0" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.777690 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6803598d-855a-4bca-bf80-3427b6b516f3" path="/var/lib/kubelet/pods/6803598d-855a-4bca-bf80-3427b6b516f3/volumes" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.780873 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de1c7793-5541-45f4-ba71-a2094dcd051d" path="/var/lib/kubelet/pods/de1c7793-5541-45f4-ba71-a2094dcd051d/volumes" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.788059 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6748775596-w8q6s" Oct 07 13:22:52 crc kubenswrapper[5024]: I1007 13:22:52.891684 5024 scope.go:117] "RemoveContainer" containerID="3f31f975f6f8e6ea362b17462ce5ec881b0b750045b35c230f48f5cc59b9a3ae" Oct 07 13:22:54 crc kubenswrapper[5024]: I1007 13:22:54.328876 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6db8948-lncr7" Oct 07 13:22:54 crc kubenswrapper[5024]: I1007 13:22:54.565939 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6748775596-w8q6s" Oct 07 13:22:54 crc kubenswrapper[5024]: I1007 13:22:54.627595 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6db8948-lncr7"] Oct 07 13:22:54 crc kubenswrapper[5024]: I1007 13:22:54.627965 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6db8948-lncr7" podUID="2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782" containerName="horizon-log" containerID="cri-o://0ddab389f822d2b2f3b0f06d167e3a6f77e974ea8f14527e4507530de0b80b23" gracePeriod=30 Oct 07 13:22:54 crc 
kubenswrapper[5024]: I1007 13:22:54.628170 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6db8948-lncr7" podUID="2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782" containerName="horizon" containerID="cri-o://2c86969ca696b93b979832b0f0340a4cbad8a024ca5f53ab3b7a4d0f14c061c1" gracePeriod=30 Oct 07 13:22:55 crc kubenswrapper[5024]: I1007 13:22:55.418517 5024 generic.go:334] "Generic (PLEG): container finished" podID="1c854e07-cc2f-4944-b520-38e55f3ae348" containerID="2e4286714c1ca8b284240c76ec8ed3ade152623a7d0204e1685244aec5fb0d60" exitCode=0 Oct 07 13:22:55 crc kubenswrapper[5024]: I1007 13:22:55.418860 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kfnj" event={"ID":"1c854e07-cc2f-4944-b520-38e55f3ae348","Type":"ContainerDied","Data":"2e4286714c1ca8b284240c76ec8ed3ade152623a7d0204e1685244aec5fb0d60"} Oct 07 13:22:55 crc kubenswrapper[5024]: I1007 13:22:55.421486 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 07 13:22:56 crc kubenswrapper[5024]: I1007 13:22:56.431114 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kfnj" event={"ID":"1c854e07-cc2f-4944-b520-38e55f3ae348","Type":"ContainerStarted","Data":"cda5464159d98988333e73e6053158c768f8efee60c3913f6f90b0aaedce8f4f"} Oct 07 13:22:56 crc kubenswrapper[5024]: I1007 13:22:56.463224 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7kfnj" podStartSLOduration=2.512964963 podStartE2EDuration="19.463200389s" podCreationTimestamp="2025-10-07 13:22:37 +0000 UTC" firstStartedPulling="2025-10-07 13:22:39.181046683 +0000 UTC m=+3297.256833511" lastFinishedPulling="2025-10-07 13:22:56.131282079 +0000 UTC m=+3314.207068937" observedRunningTime="2025-10-07 13:22:56.45216151 +0000 UTC m=+3314.527948348" watchObservedRunningTime="2025-10-07 13:22:56.463200389 +0000 UTC 
m=+3314.538987217" Oct 07 13:22:57 crc kubenswrapper[5024]: I1007 13:22:57.947024 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7kfnj" Oct 07 13:22:57 crc kubenswrapper[5024]: I1007 13:22:57.947555 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7kfnj" Oct 07 13:22:58 crc kubenswrapper[5024]: I1007 13:22:58.454097 5024 generic.go:334] "Generic (PLEG): container finished" podID="2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782" containerID="2c86969ca696b93b979832b0f0340a4cbad8a024ca5f53ab3b7a4d0f14c061c1" exitCode=0 Oct 07 13:22:58 crc kubenswrapper[5024]: I1007 13:22:58.454172 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6db8948-lncr7" event={"ID":"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782","Type":"ContainerDied","Data":"2c86969ca696b93b979832b0f0340a4cbad8a024ca5f53ab3b7a4d0f14c061c1"} Oct 07 13:22:59 crc kubenswrapper[5024]: I1007 13:22:59.010153 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7kfnj" podUID="1c854e07-cc2f-4944-b520-38e55f3ae348" containerName="registry-server" probeResult="failure" output=< Oct 07 13:22:59 crc kubenswrapper[5024]: timeout: failed to connect service ":50051" within 1s Oct 07 13:22:59 crc kubenswrapper[5024]: > Oct 07 13:23:00 crc kubenswrapper[5024]: I1007 13:23:00.609471 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6db8948-lncr7" podUID="2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused" Oct 07 13:23:04 crc kubenswrapper[5024]: I1007 13:23:04.520585 5024 generic.go:334] "Generic (PLEG): container finished" podID="aad68430-9c4b-4a22-b2a3-417a796af04b" 
containerID="49091536d96d3757b29ec84b40c9bad72868024751b83cbe1b55f663a15d3599" exitCode=0 Oct 07 13:23:04 crc kubenswrapper[5024]: I1007 13:23:04.520649 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-6fppq" event={"ID":"aad68430-9c4b-4a22-b2a3-417a796af04b","Type":"ContainerDied","Data":"49091536d96d3757b29ec84b40c9bad72868024751b83cbe1b55f663a15d3599"} Oct 07 13:23:05 crc kubenswrapper[5024]: I1007 13:23:05.751209 5024 scope.go:117] "RemoveContainer" containerID="7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd" Oct 07 13:23:05 crc kubenswrapper[5024]: E1007 13:23:05.751567 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.040060 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-6fppq" Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.156064 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad68430-9c4b-4a22-b2a3-417a796af04b-combined-ca-bundle\") pod \"aad68430-9c4b-4a22-b2a3-417a796af04b\" (UID: \"aad68430-9c4b-4a22-b2a3-417a796af04b\") " Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.156376 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/aad68430-9c4b-4a22-b2a3-417a796af04b-job-config-data\") pod \"aad68430-9c4b-4a22-b2a3-417a796af04b\" (UID: \"aad68430-9c4b-4a22-b2a3-417a796af04b\") " Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.156430 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aad68430-9c4b-4a22-b2a3-417a796af04b-config-data\") pod \"aad68430-9c4b-4a22-b2a3-417a796af04b\" (UID: \"aad68430-9c4b-4a22-b2a3-417a796af04b\") " Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.156467 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdjtk\" (UniqueName: \"kubernetes.io/projected/aad68430-9c4b-4a22-b2a3-417a796af04b-kube-api-access-qdjtk\") pod \"aad68430-9c4b-4a22-b2a3-417a796af04b\" (UID: \"aad68430-9c4b-4a22-b2a3-417a796af04b\") " Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.165092 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aad68430-9c4b-4a22-b2a3-417a796af04b-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "aad68430-9c4b-4a22-b2a3-417a796af04b" (UID: "aad68430-9c4b-4a22-b2a3-417a796af04b"). InnerVolumeSpecName "job-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.165935 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aad68430-9c4b-4a22-b2a3-417a796af04b-kube-api-access-qdjtk" (OuterVolumeSpecName: "kube-api-access-qdjtk") pod "aad68430-9c4b-4a22-b2a3-417a796af04b" (UID: "aad68430-9c4b-4a22-b2a3-417a796af04b"). InnerVolumeSpecName "kube-api-access-qdjtk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.168771 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aad68430-9c4b-4a22-b2a3-417a796af04b-config-data" (OuterVolumeSpecName: "config-data") pod "aad68430-9c4b-4a22-b2a3-417a796af04b" (UID: "aad68430-9c4b-4a22-b2a3-417a796af04b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.195853 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aad68430-9c4b-4a22-b2a3-417a796af04b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aad68430-9c4b-4a22-b2a3-417a796af04b" (UID: "aad68430-9c4b-4a22-b2a3-417a796af04b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.258819 5024 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/aad68430-9c4b-4a22-b2a3-417a796af04b-job-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.258864 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aad68430-9c4b-4a22-b2a3-417a796af04b-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.258880 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdjtk\" (UniqueName: \"kubernetes.io/projected/aad68430-9c4b-4a22-b2a3-417a796af04b-kube-api-access-qdjtk\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.258896 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad68430-9c4b-4a22-b2a3-417a796af04b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.552029 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-6fppq" event={"ID":"aad68430-9c4b-4a22-b2a3-417a796af04b","Type":"ContainerDied","Data":"abda36d4ca089e543b35b86e6aa520f3a0b24b670cab0a471fad76db89282b23"} Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.552110 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abda36d4ca089e543b35b86e6aa520f3a0b24b670cab0a471fad76db89282b23" Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.552656 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-6fppq" Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.989227 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Oct 07 13:23:06 crc kubenswrapper[5024]: E1007 13:23:06.989676 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6803598d-855a-4bca-bf80-3427b6b516f3" containerName="horizon" Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.989694 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="6803598d-855a-4bca-bf80-3427b6b516f3" containerName="horizon" Oct 07 13:23:06 crc kubenswrapper[5024]: E1007 13:23:06.989718 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1c7793-5541-45f4-ba71-a2094dcd051d" containerName="horizon" Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.989730 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1c7793-5541-45f4-ba71-a2094dcd051d" containerName="horizon" Oct 07 13:23:06 crc kubenswrapper[5024]: E1007 13:23:06.989765 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad68430-9c4b-4a22-b2a3-417a796af04b" containerName="manila-db-sync" Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.989773 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad68430-9c4b-4a22-b2a3-417a796af04b" containerName="manila-db-sync" Oct 07 13:23:06 crc kubenswrapper[5024]: E1007 13:23:06.989791 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6803598d-855a-4bca-bf80-3427b6b516f3" containerName="horizon-log" Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.989799 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="6803598d-855a-4bca-bf80-3427b6b516f3" containerName="horizon-log" Oct 07 13:23:06 crc kubenswrapper[5024]: E1007 13:23:06.989816 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1c7793-5541-45f4-ba71-a2094dcd051d" containerName="horizon-log" Oct 07 13:23:06 crc 
kubenswrapper[5024]: I1007 13:23:06.989824 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1c7793-5541-45f4-ba71-a2094dcd051d" containerName="horizon-log" Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.990024 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="6803598d-855a-4bca-bf80-3427b6b516f3" containerName="horizon" Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.990034 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad68430-9c4b-4a22-b2a3-417a796af04b" containerName="manila-db-sync" Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.990045 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="6803598d-855a-4bca-bf80-3427b6b516f3" containerName="horizon-log" Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.990059 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="de1c7793-5541-45f4-ba71-a2094dcd051d" containerName="horizon" Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.990081 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="de1c7793-5541-45f4-ba71-a2094dcd051d" containerName="horizon-log" Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.991260 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.993621 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.994441 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.994639 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-269wc" Oct 07 13:23:06 crc kubenswrapper[5024]: I1007 13:23:06.994849 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.004522 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.006595 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.010110 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.024509 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.036329 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.106629 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-d2sxl"] Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.108594 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-d2sxl" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.123851 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-d2sxl"] Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.182534 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a8469e3-c933-44aa-8d01-325211458238-scripts\") pod \"manila-scheduler-0\" (UID: \"3a8469e3-c933-44aa-8d01-325211458238\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.182932 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-config-data\") pod \"manila-share-share1-0\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.183210 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-ceph\") pod \"manila-share-share1-0\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.183396 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a8469e3-c933-44aa-8d01-325211458238-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"3a8469e3-c933-44aa-8d01-325211458238\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.183540 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/3a8469e3-c933-44aa-8d01-325211458238-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"3a8469e3-c933-44aa-8d01-325211458238\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.183601 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86696\" (UniqueName: \"kubernetes.io/projected/3a8469e3-c933-44aa-8d01-325211458238-kube-api-access-86696\") pod \"manila-scheduler-0\" (UID: \"3a8469e3-c933-44aa-8d01-325211458238\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.183654 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a8469e3-c933-44aa-8d01-325211458238-config-data\") pod \"manila-scheduler-0\" (UID: \"3a8469e3-c933-44aa-8d01-325211458238\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.183715 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-scripts\") pod \"manila-share-share1-0\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.183927 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4w9r\" (UniqueName: \"kubernetes.io/projected/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-kube-api-access-c4w9r\") pod \"manila-share-share1-0\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.184029 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.184127 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.184196 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.184266 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.184300 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a8469e3-c933-44aa-8d01-325211458238-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"3a8469e3-c933-44aa-8d01-325211458238\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.254977 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.256856 5024 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.262545 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.267935 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.312605 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82b8c291-f0c7-4944-8d91-46d7e4b16fb0-config\") pod \"dnsmasq-dns-76b5fdb995-d2sxl\" (UID: \"82b8c291-f0c7-4944-8d91-46d7e4b16fb0\") " pod="openstack/dnsmasq-dns-76b5fdb995-d2sxl" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.312746 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a8469e3-c933-44aa-8d01-325211458238-scripts\") pod \"manila-scheduler-0\" (UID: \"3a8469e3-c933-44aa-8d01-325211458238\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.312828 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-config-data\") pod \"manila-share-share1-0\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.312893 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-ceph\") pod \"manila-share-share1-0\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.312988 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5f7l\" (UniqueName: \"kubernetes.io/projected/a2632b1f-8d01-40b4-9d68-27677375c783-kube-api-access-k5f7l\") pod \"manila-api-0\" (UID: \"a2632b1f-8d01-40b4-9d68-27677375c783\") " pod="openstack/manila-api-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.313072 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2632b1f-8d01-40b4-9d68-27677375c783-scripts\") pod \"manila-api-0\" (UID: \"a2632b1f-8d01-40b4-9d68-27677375c783\") " pod="openstack/manila-api-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.313245 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2632b1f-8d01-40b4-9d68-27677375c783-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"a2632b1f-8d01-40b4-9d68-27677375c783\") " pod="openstack/manila-api-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.313276 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a8469e3-c933-44aa-8d01-325211458238-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"3a8469e3-c933-44aa-8d01-325211458238\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.313296 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a8469e3-c933-44aa-8d01-325211458238-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"3a8469e3-c933-44aa-8d01-325211458238\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.313314 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3a8469e3-c933-44aa-8d01-325211458238-config-data\") pod \"manila-scheduler-0\" (UID: \"3a8469e3-c933-44aa-8d01-325211458238\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.313333 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86696\" (UniqueName: \"kubernetes.io/projected/3a8469e3-c933-44aa-8d01-325211458238-kube-api-access-86696\") pod \"manila-scheduler-0\" (UID: \"3a8469e3-c933-44aa-8d01-325211458238\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.313362 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82b8c291-f0c7-4944-8d91-46d7e4b16fb0-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-d2sxl\" (UID: \"82b8c291-f0c7-4944-8d91-46d7e4b16fb0\") " pod="openstack/dnsmasq-dns-76b5fdb995-d2sxl" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.313390 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-scripts\") pod \"manila-share-share1-0\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.313434 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2632b1f-8d01-40b4-9d68-27677375c783-config-data-custom\") pod \"manila-api-0\" (UID: \"a2632b1f-8d01-40b4-9d68-27677375c783\") " pod="openstack/manila-api-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.313458 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg9s8\" (UniqueName: 
\"kubernetes.io/projected/82b8c291-f0c7-4944-8d91-46d7e4b16fb0-kube-api-access-xg9s8\") pod \"dnsmasq-dns-76b5fdb995-d2sxl\" (UID: \"82b8c291-f0c7-4944-8d91-46d7e4b16fb0\") " pod="openstack/dnsmasq-dns-76b5fdb995-d2sxl" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.313501 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/82b8c291-f0c7-4944-8d91-46d7e4b16fb0-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-d2sxl\" (UID: \"82b8c291-f0c7-4944-8d91-46d7e4b16fb0\") " pod="openstack/dnsmasq-dns-76b5fdb995-d2sxl" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.313527 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4w9r\" (UniqueName: \"kubernetes.io/projected/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-kube-api-access-c4w9r\") pod \"manila-share-share1-0\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.313551 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2632b1f-8d01-40b4-9d68-27677375c783-etc-machine-id\") pod \"manila-api-0\" (UID: \"a2632b1f-8d01-40b4-9d68-27677375c783\") " pod="openstack/manila-api-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.313590 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.313616 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a2632b1f-8d01-40b4-9d68-27677375c783-logs\") pod \"manila-api-0\" (UID: \"a2632b1f-8d01-40b4-9d68-27677375c783\") " pod="openstack/manila-api-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.313650 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2632b1f-8d01-40b4-9d68-27677375c783-config-data\") pod \"manila-api-0\" (UID: \"a2632b1f-8d01-40b4-9d68-27677375c783\") " pod="openstack/manila-api-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.313678 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.313730 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82b8c291-f0c7-4944-8d91-46d7e4b16fb0-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-d2sxl\" (UID: \"82b8c291-f0c7-4944-8d91-46d7e4b16fb0\") " pod="openstack/dnsmasq-dns-76b5fdb995-d2sxl" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.314090 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.314174 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-var-lib-manila\") pod \"manila-share-share1-0\" (UID: 
\"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.314302 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.314437 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.314463 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82b8c291-f0c7-4944-8d91-46d7e4b16fb0-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-d2sxl\" (UID: \"82b8c291-f0c7-4944-8d91-46d7e4b16fb0\") " pod="openstack/dnsmasq-dns-76b5fdb995-d2sxl" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.314517 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a8469e3-c933-44aa-8d01-325211458238-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"3a8469e3-c933-44aa-8d01-325211458238\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.315364 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a8469e3-c933-44aa-8d01-325211458238-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"3a8469e3-c933-44aa-8d01-325211458238\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:07 crc 
kubenswrapper[5024]: I1007 13:23:07.321487 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-ceph\") pod \"manila-share-share1-0\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.321519 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a8469e3-c933-44aa-8d01-325211458238-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"3a8469e3-c933-44aa-8d01-325211458238\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.321651 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-config-data\") pod \"manila-share-share1-0\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.322337 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-scripts\") pod \"manila-share-share1-0\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.324642 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.324908 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.325362 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a8469e3-c933-44aa-8d01-325211458238-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"3a8469e3-c933-44aa-8d01-325211458238\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.329595 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a8469e3-c933-44aa-8d01-325211458238-scripts\") pod \"manila-scheduler-0\" (UID: \"3a8469e3-c933-44aa-8d01-325211458238\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.336944 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4w9r\" (UniqueName: \"kubernetes.io/projected/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-kube-api-access-c4w9r\") pod \"manila-share-share1-0\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.337069 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a8469e3-c933-44aa-8d01-325211458238-config-data\") pod \"manila-scheduler-0\" (UID: \"3a8469e3-c933-44aa-8d01-325211458238\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.337641 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86696\" (UniqueName: \"kubernetes.io/projected/3a8469e3-c933-44aa-8d01-325211458238-kube-api-access-86696\") pod \"manila-scheduler-0\" (UID: \"3a8469e3-c933-44aa-8d01-325211458238\") " 
pod="openstack/manila-scheduler-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.416112 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2632b1f-8d01-40b4-9d68-27677375c783-config-data\") pod \"manila-api-0\" (UID: \"a2632b1f-8d01-40b4-9d68-27677375c783\") " pod="openstack/manila-api-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.416196 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82b8c291-f0c7-4944-8d91-46d7e4b16fb0-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-d2sxl\" (UID: \"82b8c291-f0c7-4944-8d91-46d7e4b16fb0\") " pod="openstack/dnsmasq-dns-76b5fdb995-d2sxl" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.416242 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82b8c291-f0c7-4944-8d91-46d7e4b16fb0-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-d2sxl\" (UID: \"82b8c291-f0c7-4944-8d91-46d7e4b16fb0\") " pod="openstack/dnsmasq-dns-76b5fdb995-d2sxl" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.416280 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82b8c291-f0c7-4944-8d91-46d7e4b16fb0-config\") pod \"dnsmasq-dns-76b5fdb995-d2sxl\" (UID: \"82b8c291-f0c7-4944-8d91-46d7e4b16fb0\") " pod="openstack/dnsmasq-dns-76b5fdb995-d2sxl" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.416317 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5f7l\" (UniqueName: \"kubernetes.io/projected/a2632b1f-8d01-40b4-9d68-27677375c783-kube-api-access-k5f7l\") pod \"manila-api-0\" (UID: \"a2632b1f-8d01-40b4-9d68-27677375c783\") " pod="openstack/manila-api-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.416335 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2632b1f-8d01-40b4-9d68-27677375c783-scripts\") pod \"manila-api-0\" (UID: \"a2632b1f-8d01-40b4-9d68-27677375c783\") " pod="openstack/manila-api-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.416387 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2632b1f-8d01-40b4-9d68-27677375c783-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"a2632b1f-8d01-40b4-9d68-27677375c783\") " pod="openstack/manila-api-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.416431 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82b8c291-f0c7-4944-8d91-46d7e4b16fb0-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-d2sxl\" (UID: \"82b8c291-f0c7-4944-8d91-46d7e4b16fb0\") " pod="openstack/dnsmasq-dns-76b5fdb995-d2sxl" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.416478 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2632b1f-8d01-40b4-9d68-27677375c783-config-data-custom\") pod \"manila-api-0\" (UID: \"a2632b1f-8d01-40b4-9d68-27677375c783\") " pod="openstack/manila-api-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.416508 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg9s8\" (UniqueName: \"kubernetes.io/projected/82b8c291-f0c7-4944-8d91-46d7e4b16fb0-kube-api-access-xg9s8\") pod \"dnsmasq-dns-76b5fdb995-d2sxl\" (UID: \"82b8c291-f0c7-4944-8d91-46d7e4b16fb0\") " pod="openstack/dnsmasq-dns-76b5fdb995-d2sxl" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.416551 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/82b8c291-f0c7-4944-8d91-46d7e4b16fb0-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-d2sxl\" (UID: \"82b8c291-f0c7-4944-8d91-46d7e4b16fb0\") " pod="openstack/dnsmasq-dns-76b5fdb995-d2sxl" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.416581 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2632b1f-8d01-40b4-9d68-27677375c783-etc-machine-id\") pod \"manila-api-0\" (UID: \"a2632b1f-8d01-40b4-9d68-27677375c783\") " pod="openstack/manila-api-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.416624 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2632b1f-8d01-40b4-9d68-27677375c783-logs\") pod \"manila-api-0\" (UID: \"a2632b1f-8d01-40b4-9d68-27677375c783\") " pod="openstack/manila-api-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.417248 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2632b1f-8d01-40b4-9d68-27677375c783-logs\") pod \"manila-api-0\" (UID: \"a2632b1f-8d01-40b4-9d68-27677375c783\") " pod="openstack/manila-api-0" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.418882 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82b8c291-f0c7-4944-8d91-46d7e4b16fb0-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-d2sxl\" (UID: \"82b8c291-f0c7-4944-8d91-46d7e4b16fb0\") " pod="openstack/dnsmasq-dns-76b5fdb995-d2sxl" Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.418984 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82b8c291-f0c7-4944-8d91-46d7e4b16fb0-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-d2sxl\" (UID: \"82b8c291-f0c7-4944-8d91-46d7e4b16fb0\") " pod="openstack/dnsmasq-dns-76b5fdb995-d2sxl" Oct 07 
13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.419805 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82b8c291-f0c7-4944-8d91-46d7e4b16fb0-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-d2sxl\" (UID: \"82b8c291-f0c7-4944-8d91-46d7e4b16fb0\") " pod="openstack/dnsmasq-dns-76b5fdb995-d2sxl"
Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.419876 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2632b1f-8d01-40b4-9d68-27677375c783-etc-machine-id\") pod \"manila-api-0\" (UID: \"a2632b1f-8d01-40b4-9d68-27677375c783\") " pod="openstack/manila-api-0"
Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.420117 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/82b8c291-f0c7-4944-8d91-46d7e4b16fb0-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-d2sxl\" (UID: \"82b8c291-f0c7-4944-8d91-46d7e4b16fb0\") " pod="openstack/dnsmasq-dns-76b5fdb995-d2sxl"
Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.420813 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82b8c291-f0c7-4944-8d91-46d7e4b16fb0-config\") pod \"dnsmasq-dns-76b5fdb995-d2sxl\" (UID: \"82b8c291-f0c7-4944-8d91-46d7e4b16fb0\") " pod="openstack/dnsmasq-dns-76b5fdb995-d2sxl"
Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.424536 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2632b1f-8d01-40b4-9d68-27677375c783-scripts\") pod \"manila-api-0\" (UID: \"a2632b1f-8d01-40b4-9d68-27677375c783\") " pod="openstack/manila-api-0"
Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.425206 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2632b1f-8d01-40b4-9d68-27677375c783-config-data\") pod \"manila-api-0\" (UID: \"a2632b1f-8d01-40b4-9d68-27677375c783\") " pod="openstack/manila-api-0"
Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.425770 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2632b1f-8d01-40b4-9d68-27677375c783-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"a2632b1f-8d01-40b4-9d68-27677375c783\") " pod="openstack/manila-api-0"
Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.425962 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2632b1f-8d01-40b4-9d68-27677375c783-config-data-custom\") pod \"manila-api-0\" (UID: \"a2632b1f-8d01-40b4-9d68-27677375c783\") " pod="openstack/manila-api-0"
Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.435289 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg9s8\" (UniqueName: \"kubernetes.io/projected/82b8c291-f0c7-4944-8d91-46d7e4b16fb0-kube-api-access-xg9s8\") pod \"dnsmasq-dns-76b5fdb995-d2sxl\" (UID: \"82b8c291-f0c7-4944-8d91-46d7e4b16fb0\") " pod="openstack/dnsmasq-dns-76b5fdb995-d2sxl"
Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.439735 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5f7l\" (UniqueName: \"kubernetes.io/projected/a2632b1f-8d01-40b4-9d68-27677375c783-kube-api-access-k5f7l\") pod \"manila-api-0\" (UID: \"a2632b1f-8d01-40b4-9d68-27677375c783\") " pod="openstack/manila-api-0"
Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.517712 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.625838 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.636607 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Oct 07 13:23:07 crc kubenswrapper[5024]: I1007 13:23:07.733724 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-d2sxl"
Oct 07 13:23:08 crc kubenswrapper[5024]: I1007 13:23:08.013542 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7kfnj"
Oct 07 13:23:08 crc kubenswrapper[5024]: I1007 13:23:08.081887 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7kfnj"
Oct 07 13:23:08 crc kubenswrapper[5024]: I1007 13:23:08.207421 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"]
Oct 07 13:23:08 crc kubenswrapper[5024]: I1007 13:23:08.278395 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"]
Oct 07 13:23:08 crc kubenswrapper[5024]: I1007 13:23:08.372976 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
Oct 07 13:23:08 crc kubenswrapper[5024]: W1007 13:23:08.387392 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc5cd2d0_f5b1_4c85_931d_0df8bd1a3ff1.slice/crio-d7046281d72d557b3b2b0d48844c01de7b60e0029b9eb8716a48ca4c32857d7c WatchSource:0}: Error finding container d7046281d72d557b3b2b0d48844c01de7b60e0029b9eb8716a48ca4c32857d7c: Status 404 returned error can't find the container with id d7046281d72d557b3b2b0d48844c01de7b60e0029b9eb8716a48ca4c32857d7c
Oct 07 13:23:08 crc kubenswrapper[5024]: I1007 13:23:08.430477 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-d2sxl"]
Oct 07 13:23:08 crc kubenswrapper[5024]: I1007 13:23:08.595822 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-d2sxl" event={"ID":"82b8c291-f0c7-4944-8d91-46d7e4b16fb0","Type":"ContainerStarted","Data":"62ce15f9213265dfebb88bf56ed597a9bc0581dc74025c35931cae849d4d8d14"}
Oct 07 13:23:08 crc kubenswrapper[5024]: I1007 13:23:08.602939 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"3a8469e3-c933-44aa-8d01-325211458238","Type":"ContainerStarted","Data":"2156e1b9f8e52604cc24c88232cd1d475518846058d3249f6a3750820f6fe87e"}
Oct 07 13:23:08 crc kubenswrapper[5024]: I1007 13:23:08.606287 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1","Type":"ContainerStarted","Data":"d7046281d72d557b3b2b0d48844c01de7b60e0029b9eb8716a48ca4c32857d7c"}
Oct 07 13:23:08 crc kubenswrapper[5024]: I1007 13:23:08.607534 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a2632b1f-8d01-40b4-9d68-27677375c783","Type":"ContainerStarted","Data":"cdfa46d1e7142aed06cd9c9b0be7c39c03c2e49155a70a218338ee80de0fe027"}
Oct 07 13:23:08 crc kubenswrapper[5024]: I1007 13:23:08.772452 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7kfnj"]
Oct 07 13:23:09 crc kubenswrapper[5024]: I1007 13:23:09.632577 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a2632b1f-8d01-40b4-9d68-27677375c783","Type":"ContainerStarted","Data":"7d2798ce9ab6b899fe5fbadba4a377e53585c14962e92c198201b22bb59fb936"}
Oct 07 13:23:09 crc kubenswrapper[5024]: I1007 13:23:09.633238 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a2632b1f-8d01-40b4-9d68-27677375c783","Type":"ContainerStarted","Data":"5787f245ab621147eea6231979988c0d6fe6d93292721fcbc22e58d56a7f1530"}
Oct 07 13:23:09 crc kubenswrapper[5024]: I1007 13:23:09.633266 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0"
Oct 07 13:23:09 crc kubenswrapper[5024]: I1007 13:23:09.653119 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"]
Oct 07 13:23:09 crc kubenswrapper[5024]: I1007 13:23:09.657683 5024 generic.go:334] "Generic (PLEG): container finished" podID="82b8c291-f0c7-4944-8d91-46d7e4b16fb0" containerID="580d97a806120efff30c57142bcc8818d58531fad308600194044e8f28170fa6" exitCode=0
Oct 07 13:23:09 crc kubenswrapper[5024]: I1007 13:23:09.657950 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7kfnj" podUID="1c854e07-cc2f-4944-b520-38e55f3ae348" containerName="registry-server" containerID="cri-o://cda5464159d98988333e73e6053158c768f8efee60c3913f6f90b0aaedce8f4f" gracePeriod=2
Oct 07 13:23:09 crc kubenswrapper[5024]: I1007 13:23:09.658067 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-d2sxl" event={"ID":"82b8c291-f0c7-4944-8d91-46d7e4b16fb0","Type":"ContainerDied","Data":"580d97a806120efff30c57142bcc8818d58531fad308600194044e8f28170fa6"}
Oct 07 13:23:09 crc kubenswrapper[5024]: I1007 13:23:09.660775 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=2.660753096 podStartE2EDuration="2.660753096s" podCreationTimestamp="2025-10-07 13:23:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 13:23:09.658079059 +0000 UTC m=+3327.733865907" watchObservedRunningTime="2025-10-07 13:23:09.660753096 +0000 UTC m=+3327.736539934"
Oct 07 13:23:10 crc kubenswrapper[5024]: I1007 13:23:10.609486 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6db8948-lncr7" podUID="2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused"
Oct 07 13:23:10 crc kubenswrapper[5024]: I1007 13:23:10.673030 5024 generic.go:334] "Generic (PLEG): container finished" podID="1c854e07-cc2f-4944-b520-38e55f3ae348" containerID="cda5464159d98988333e73e6053158c768f8efee60c3913f6f90b0aaedce8f4f" exitCode=0
Oct 07 13:23:10 crc kubenswrapper[5024]: I1007 13:23:10.673097 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kfnj" event={"ID":"1c854e07-cc2f-4944-b520-38e55f3ae348","Type":"ContainerDied","Data":"cda5464159d98988333e73e6053158c768f8efee60c3913f6f90b0aaedce8f4f"}
Oct 07 13:23:10 crc kubenswrapper[5024]: I1007 13:23:10.673129 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kfnj" event={"ID":"1c854e07-cc2f-4944-b520-38e55f3ae348","Type":"ContainerDied","Data":"f611b3cab0a21661eddcb9dfb8aeecba18464906658af14e8e5b07634f7038b3"}
Oct 07 13:23:10 crc kubenswrapper[5024]: I1007 13:23:10.673160 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f611b3cab0a21661eddcb9dfb8aeecba18464906658af14e8e5b07634f7038b3"
Oct 07 13:23:10 crc kubenswrapper[5024]: I1007 13:23:10.683415 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7kfnj"
Oct 07 13:23:10 crc kubenswrapper[5024]: I1007 13:23:10.690811 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-d2sxl" event={"ID":"82b8c291-f0c7-4944-8d91-46d7e4b16fb0","Type":"ContainerStarted","Data":"b65041e7bd8a9bbb93718adc489c9dc84e83c8e272834b75ee2394cd4caa8709"}
Oct 07 13:23:10 crc kubenswrapper[5024]: I1007 13:23:10.690995 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76b5fdb995-d2sxl"
Oct 07 13:23:10 crc kubenswrapper[5024]: I1007 13:23:10.702937 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"3a8469e3-c933-44aa-8d01-325211458238","Type":"ContainerStarted","Data":"b2346807f6119cbb6106e02eda2745c0e118b4bb6b2b62ef94ee1d2e11da1012"}
Oct 07 13:23:10 crc kubenswrapper[5024]: I1007 13:23:10.702995 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"3a8469e3-c933-44aa-8d01-325211458238","Type":"ContainerStarted","Data":"2c1043621d39e45d25eea9a9e7bd010c9dedf49370a612cf5fb2372d8a6e8746"}
Oct 07 13:23:10 crc kubenswrapper[5024]: I1007 13:23:10.732186 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g56qf\" (UniqueName: \"kubernetes.io/projected/1c854e07-cc2f-4944-b520-38e55f3ae348-kube-api-access-g56qf\") pod \"1c854e07-cc2f-4944-b520-38e55f3ae348\" (UID: \"1c854e07-cc2f-4944-b520-38e55f3ae348\") "
Oct 07 13:23:10 crc kubenswrapper[5024]: I1007 13:23:10.732467 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c854e07-cc2f-4944-b520-38e55f3ae348-utilities\") pod \"1c854e07-cc2f-4944-b520-38e55f3ae348\" (UID: \"1c854e07-cc2f-4944-b520-38e55f3ae348\") "
Oct 07 13:23:10 crc kubenswrapper[5024]: I1007 13:23:10.732575 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c854e07-cc2f-4944-b520-38e55f3ae348-catalog-content\") pod \"1c854e07-cc2f-4944-b520-38e55f3ae348\" (UID: \"1c854e07-cc2f-4944-b520-38e55f3ae348\") "
Oct 07 13:23:10 crc kubenswrapper[5024]: I1007 13:23:10.739482 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c854e07-cc2f-4944-b520-38e55f3ae348-utilities" (OuterVolumeSpecName: "utilities") pod "1c854e07-cc2f-4944-b520-38e55f3ae348" (UID: "1c854e07-cc2f-4944-b520-38e55f3ae348"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 07 13:23:10 crc kubenswrapper[5024]: I1007 13:23:10.742190 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.998068392 podStartE2EDuration="4.742164264s" podCreationTimestamp="2025-10-07 13:23:06 +0000 UTC" firstStartedPulling="2025-10-07 13:23:08.302358018 +0000 UTC m=+3326.378144856" lastFinishedPulling="2025-10-07 13:23:09.04645389 +0000 UTC m=+3327.122240728" observedRunningTime="2025-10-07 13:23:10.727349385 +0000 UTC m=+3328.803136233" watchObservedRunningTime="2025-10-07 13:23:10.742164264 +0000 UTC m=+3328.817951102"
Oct 07 13:23:10 crc kubenswrapper[5024]: I1007 13:23:10.742557 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c854e07-cc2f-4944-b520-38e55f3ae348-kube-api-access-g56qf" (OuterVolumeSpecName: "kube-api-access-g56qf") pod "1c854e07-cc2f-4944-b520-38e55f3ae348" (UID: "1c854e07-cc2f-4944-b520-38e55f3ae348"). InnerVolumeSpecName "kube-api-access-g56qf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 13:23:10 crc kubenswrapper[5024]: I1007 13:23:10.763175 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76b5fdb995-d2sxl" podStartSLOduration=3.763150281 podStartE2EDuration="3.763150281s" podCreationTimestamp="2025-10-07 13:23:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 13:23:10.754338136 +0000 UTC m=+3328.830124984" watchObservedRunningTime="2025-10-07 13:23:10.763150281 +0000 UTC m=+3328.838937119"
Oct 07 13:23:10 crc kubenswrapper[5024]: I1007 13:23:10.835394 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c854e07-cc2f-4944-b520-38e55f3ae348-utilities\") on node \"crc\" DevicePath \"\""
Oct 07 13:23:10 crc kubenswrapper[5024]: I1007 13:23:10.835433 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g56qf\" (UniqueName: \"kubernetes.io/projected/1c854e07-cc2f-4944-b520-38e55f3ae348-kube-api-access-g56qf\") on node \"crc\" DevicePath \"\""
Oct 07 13:23:10 crc kubenswrapper[5024]: I1007 13:23:10.849025 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c854e07-cc2f-4944-b520-38e55f3ae348-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c854e07-cc2f-4944-b520-38e55f3ae348" (UID: "1c854e07-cc2f-4944-b520-38e55f3ae348"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 07 13:23:10 crc kubenswrapper[5024]: I1007 13:23:10.938448 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c854e07-cc2f-4944-b520-38e55f3ae348-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 07 13:23:11 crc kubenswrapper[5024]: I1007 13:23:11.709824 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7kfnj"
Oct 07 13:23:11 crc kubenswrapper[5024]: I1007 13:23:11.710686 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="a2632b1f-8d01-40b4-9d68-27677375c783" containerName="manila-api-log" containerID="cri-o://5787f245ab621147eea6231979988c0d6fe6d93292721fcbc22e58d56a7f1530" gracePeriod=30
Oct 07 13:23:11 crc kubenswrapper[5024]: I1007 13:23:11.710726 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="a2632b1f-8d01-40b4-9d68-27677375c783" containerName="manila-api" containerID="cri-o://7d2798ce9ab6b899fe5fbadba4a377e53585c14962e92c198201b22bb59fb936" gracePeriod=30
Oct 07 13:23:11 crc kubenswrapper[5024]: I1007 13:23:11.751119 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7kfnj"]
Oct 07 13:23:11 crc kubenswrapper[5024]: I1007 13:23:11.761755 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7kfnj"]
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.443608 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.475450 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2632b1f-8d01-40b4-9d68-27677375c783-combined-ca-bundle\") pod \"a2632b1f-8d01-40b4-9d68-27677375c783\" (UID: \"a2632b1f-8d01-40b4-9d68-27677375c783\") "
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.475535 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2632b1f-8d01-40b4-9d68-27677375c783-config-data\") pod \"a2632b1f-8d01-40b4-9d68-27677375c783\" (UID: \"a2632b1f-8d01-40b4-9d68-27677375c783\") "
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.475583 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2632b1f-8d01-40b4-9d68-27677375c783-config-data-custom\") pod \"a2632b1f-8d01-40b4-9d68-27677375c783\" (UID: \"a2632b1f-8d01-40b4-9d68-27677375c783\") "
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.475684 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2632b1f-8d01-40b4-9d68-27677375c783-logs\") pod \"a2632b1f-8d01-40b4-9d68-27677375c783\" (UID: \"a2632b1f-8d01-40b4-9d68-27677375c783\") "
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.475717 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2632b1f-8d01-40b4-9d68-27677375c783-scripts\") pod \"a2632b1f-8d01-40b4-9d68-27677375c783\" (UID: \"a2632b1f-8d01-40b4-9d68-27677375c783\") "
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.475742 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5f7l\" (UniqueName: \"kubernetes.io/projected/a2632b1f-8d01-40b4-9d68-27677375c783-kube-api-access-k5f7l\") pod \"a2632b1f-8d01-40b4-9d68-27677375c783\" (UID: \"a2632b1f-8d01-40b4-9d68-27677375c783\") "
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.475840 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2632b1f-8d01-40b4-9d68-27677375c783-etc-machine-id\") pod \"a2632b1f-8d01-40b4-9d68-27677375c783\" (UID: \"a2632b1f-8d01-40b4-9d68-27677375c783\") "
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.476429 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2632b1f-8d01-40b4-9d68-27677375c783-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a2632b1f-8d01-40b4-9d68-27677375c783" (UID: "a2632b1f-8d01-40b4-9d68-27677375c783"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.479090 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2632b1f-8d01-40b4-9d68-27677375c783-logs" (OuterVolumeSpecName: "logs") pod "a2632b1f-8d01-40b4-9d68-27677375c783" (UID: "a2632b1f-8d01-40b4-9d68-27677375c783"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.516268 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2632b1f-8d01-40b4-9d68-27677375c783-scripts" (OuterVolumeSpecName: "scripts") pod "a2632b1f-8d01-40b4-9d68-27677375c783" (UID: "a2632b1f-8d01-40b4-9d68-27677375c783"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.517730 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2632b1f-8d01-40b4-9d68-27677375c783-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a2632b1f-8d01-40b4-9d68-27677375c783" (UID: "a2632b1f-8d01-40b4-9d68-27677375c783"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.534334 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2632b1f-8d01-40b4-9d68-27677375c783-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2632b1f-8d01-40b4-9d68-27677375c783" (UID: "a2632b1f-8d01-40b4-9d68-27677375c783"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.541207 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2632b1f-8d01-40b4-9d68-27677375c783-kube-api-access-k5f7l" (OuterVolumeSpecName: "kube-api-access-k5f7l") pod "a2632b1f-8d01-40b4-9d68-27677375c783" (UID: "a2632b1f-8d01-40b4-9d68-27677375c783"). InnerVolumeSpecName "kube-api-access-k5f7l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.579805 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2632b1f-8d01-40b4-9d68-27677375c783-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.579849 5024 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2632b1f-8d01-40b4-9d68-27677375c783-config-data-custom\") on node \"crc\" DevicePath \"\""
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.579859 5024 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2632b1f-8d01-40b4-9d68-27677375c783-logs\") on node \"crc\" DevicePath \"\""
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.579870 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2632b1f-8d01-40b4-9d68-27677375c783-scripts\") on node \"crc\" DevicePath \"\""
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.579880 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5f7l\" (UniqueName: \"kubernetes.io/projected/a2632b1f-8d01-40b4-9d68-27677375c783-kube-api-access-k5f7l\") on node \"crc\" DevicePath \"\""
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.579892 5024 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2632b1f-8d01-40b4-9d68-27677375c783-etc-machine-id\") on node \"crc\" DevicePath \"\""
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.609047 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2632b1f-8d01-40b4-9d68-27677375c783-config-data" (OuterVolumeSpecName: "config-data") pod "a2632b1f-8d01-40b4-9d68-27677375c783" (UID: "a2632b1f-8d01-40b4-9d68-27677375c783"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.682004 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2632b1f-8d01-40b4-9d68-27677375c783-config-data\") on node \"crc\" DevicePath \"\""
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.728552 5024 generic.go:334] "Generic (PLEG): container finished" podID="a2632b1f-8d01-40b4-9d68-27677375c783" containerID="7d2798ce9ab6b899fe5fbadba4a377e53585c14962e92c198201b22bb59fb936" exitCode=0
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.728591 5024 generic.go:334] "Generic (PLEG): container finished" podID="a2632b1f-8d01-40b4-9d68-27677375c783" containerID="5787f245ab621147eea6231979988c0d6fe6d93292721fcbc22e58d56a7f1530" exitCode=143
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.728622 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a2632b1f-8d01-40b4-9d68-27677375c783","Type":"ContainerDied","Data":"7d2798ce9ab6b899fe5fbadba4a377e53585c14962e92c198201b22bb59fb936"}
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.728655 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a2632b1f-8d01-40b4-9d68-27677375c783","Type":"ContainerDied","Data":"5787f245ab621147eea6231979988c0d6fe6d93292721fcbc22e58d56a7f1530"}
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.728666 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a2632b1f-8d01-40b4-9d68-27677375c783","Type":"ContainerDied","Data":"cdfa46d1e7142aed06cd9c9b0be7c39c03c2e49155a70a218338ee80de0fe027"}
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.728684 5024 scope.go:117] "RemoveContainer" containerID="7d2798ce9ab6b899fe5fbadba4a377e53585c14962e92c198201b22bb59fb936"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.728829 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.775064 5024 scope.go:117] "RemoveContainer" containerID="5787f245ab621147eea6231979988c0d6fe6d93292721fcbc22e58d56a7f1530"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.779264 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c854e07-cc2f-4944-b520-38e55f3ae348" path="/var/lib/kubelet/pods/1c854e07-cc2f-4944-b520-38e55f3ae348/volumes"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.781671 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"]
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.795967 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"]
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.805396 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"]
Oct 07 13:23:12 crc kubenswrapper[5024]: E1007 13:23:12.805915 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c854e07-cc2f-4944-b520-38e55f3ae348" containerName="extract-utilities"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.805928 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c854e07-cc2f-4944-b520-38e55f3ae348" containerName="extract-utilities"
Oct 07 13:23:12 crc kubenswrapper[5024]: E1007 13:23:12.805946 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2632b1f-8d01-40b4-9d68-27677375c783" containerName="manila-api"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.805952 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2632b1f-8d01-40b4-9d68-27677375c783" containerName="manila-api"
Oct 07 13:23:12 crc kubenswrapper[5024]: E1007 13:23:12.805972 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2632b1f-8d01-40b4-9d68-27677375c783" containerName="manila-api-log"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.805979 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2632b1f-8d01-40b4-9d68-27677375c783" containerName="manila-api-log"
Oct 07 13:23:12 crc kubenswrapper[5024]: E1007 13:23:12.805989 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c854e07-cc2f-4944-b520-38e55f3ae348" containerName="extract-content"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.805995 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c854e07-cc2f-4944-b520-38e55f3ae348" containerName="extract-content"
Oct 07 13:23:12 crc kubenswrapper[5024]: E1007 13:23:12.806012 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c854e07-cc2f-4944-b520-38e55f3ae348" containerName="registry-server"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.806018 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c854e07-cc2f-4944-b520-38e55f3ae348" containerName="registry-server"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.806229 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c854e07-cc2f-4944-b520-38e55f3ae348" containerName="registry-server"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.806247 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2632b1f-8d01-40b4-9d68-27677375c783" containerName="manila-api-log"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.806276 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2632b1f-8d01-40b4-9d68-27677375c783" containerName="manila-api"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.807391 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.811117 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.811369 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.811930 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.818250 5024 scope.go:117] "RemoveContainer" containerID="7d2798ce9ab6b899fe5fbadba4a377e53585c14962e92c198201b22bb59fb936"
Oct 07 13:23:12 crc kubenswrapper[5024]: E1007 13:23:12.827424 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d2798ce9ab6b899fe5fbadba4a377e53585c14962e92c198201b22bb59fb936\": container with ID starting with 7d2798ce9ab6b899fe5fbadba4a377e53585c14962e92c198201b22bb59fb936 not found: ID does not exist" containerID="7d2798ce9ab6b899fe5fbadba4a377e53585c14962e92c198201b22bb59fb936"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.827476 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d2798ce9ab6b899fe5fbadba4a377e53585c14962e92c198201b22bb59fb936"} err="failed to get container status \"7d2798ce9ab6b899fe5fbadba4a377e53585c14962e92c198201b22bb59fb936\": rpc error: code = NotFound desc = could not find container \"7d2798ce9ab6b899fe5fbadba4a377e53585c14962e92c198201b22bb59fb936\": container with ID starting with 7d2798ce9ab6b899fe5fbadba4a377e53585c14962e92c198201b22bb59fb936 not found: ID does not exist"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.827506 5024 scope.go:117] "RemoveContainer" containerID="5787f245ab621147eea6231979988c0d6fe6d93292721fcbc22e58d56a7f1530"
Oct 07 13:23:12 crc kubenswrapper[5024]: E1007 13:23:12.831912 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5787f245ab621147eea6231979988c0d6fe6d93292721fcbc22e58d56a7f1530\": container with ID starting with 5787f245ab621147eea6231979988c0d6fe6d93292721fcbc22e58d56a7f1530 not found: ID does not exist" containerID="5787f245ab621147eea6231979988c0d6fe6d93292721fcbc22e58d56a7f1530"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.831968 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5787f245ab621147eea6231979988c0d6fe6d93292721fcbc22e58d56a7f1530"} err="failed to get container status \"5787f245ab621147eea6231979988c0d6fe6d93292721fcbc22e58d56a7f1530\": rpc error: code = NotFound desc = could not find container \"5787f245ab621147eea6231979988c0d6fe6d93292721fcbc22e58d56a7f1530\": container with ID starting with 5787f245ab621147eea6231979988c0d6fe6d93292721fcbc22e58d56a7f1530 not found: ID does not exist"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.831997 5024 scope.go:117] "RemoveContainer" containerID="7d2798ce9ab6b899fe5fbadba4a377e53585c14962e92c198201b22bb59fb936"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.832694 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d2798ce9ab6b899fe5fbadba4a377e53585c14962e92c198201b22bb59fb936"} err="failed to get container status \"7d2798ce9ab6b899fe5fbadba4a377e53585c14962e92c198201b22bb59fb936\": rpc error: code = NotFound desc = could not find container \"7d2798ce9ab6b899fe5fbadba4a377e53585c14962e92c198201b22bb59fb936\": container with ID starting with 7d2798ce9ab6b899fe5fbadba4a377e53585c14962e92c198201b22bb59fb936 not found: ID does not exist"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.832717 5024 scope.go:117] "RemoveContainer" containerID="5787f245ab621147eea6231979988c0d6fe6d93292721fcbc22e58d56a7f1530"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.832894 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5787f245ab621147eea6231979988c0d6fe6d93292721fcbc22e58d56a7f1530"} err="failed to get container status \"5787f245ab621147eea6231979988c0d6fe6d93292721fcbc22e58d56a7f1530\": rpc error: code = NotFound desc = could not find container \"5787f245ab621147eea6231979988c0d6fe6d93292721fcbc22e58d56a7f1530\": container with ID starting with 5787f245ab621147eea6231979988c0d6fe6d93292721fcbc22e58d56a7f1530 not found: ID does not exist"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.836007 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"]
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.891994 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.892325 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aad19176-43cb-4ebc-8605-8d619f32e96b" containerName="ceilometer-central-agent" containerID="cri-o://707516c3036a055bf960a5c4c70727477b998fd450cfa93a7c2f93395b9d0435" gracePeriod=30
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.892391 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aad19176-43cb-4ebc-8605-8d619f32e96b" containerName="ceilometer-notification-agent" containerID="cri-o://e31fe92b5980eae90855f7b387d75664850fbe569754c45ced756cad9e481862" gracePeriod=30
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.892417 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aad19176-43cb-4ebc-8605-8d619f32e96b" containerName="sg-core" containerID="cri-o://ad52b5b371f3cebd67a868f43b4a771168905dd6e8a93e2733037d3f3b66f522" gracePeriod=30
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.892347 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aad19176-43cb-4ebc-8605-8d619f32e96b" containerName="proxy-httpd" containerID="cri-o://1d7d19c0a3eba91dd93ae7770e4ff472fc0407cfb5d5c8a699a36f5d909c92e5" gracePeriod=30
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.989348 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccccf784-0422-43bf-9926-c887daea816f-logs\") pod \"manila-api-0\" (UID: \"ccccf784-0422-43bf-9926-c887daea816f\") " pod="openstack/manila-api-0"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.989654 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccccf784-0422-43bf-9926-c887daea816f-config-data\") pod \"manila-api-0\" (UID: \"ccccf784-0422-43bf-9926-c887daea816f\") " pod="openstack/manila-api-0"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.989718 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t89rf\" (UniqueName: \"kubernetes.io/projected/ccccf784-0422-43bf-9926-c887daea816f-kube-api-access-t89rf\") pod \"manila-api-0\" (UID: \"ccccf784-0422-43bf-9926-c887daea816f\") " pod="openstack/manila-api-0"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.989743 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ccccf784-0422-43bf-9926-c887daea816f-etc-machine-id\") pod \"manila-api-0\" (UID: \"ccccf784-0422-43bf-9926-c887daea816f\") " pod="openstack/manila-api-0"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.989761 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccccf784-0422-43bf-9926-c887daea816f-scripts\") pod \"manila-api-0\" (UID: \"ccccf784-0422-43bf-9926-c887daea816f\") " pod="openstack/manila-api-0"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.989787 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccccf784-0422-43bf-9926-c887daea816f-internal-tls-certs\") pod \"manila-api-0\" (UID: \"ccccf784-0422-43bf-9926-c887daea816f\") " pod="openstack/manila-api-0"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.989806 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccccf784-0422-43bf-9926-c887daea816f-public-tls-certs\") pod \"manila-api-0\" (UID: \"ccccf784-0422-43bf-9926-c887daea816f\") " pod="openstack/manila-api-0"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.989823 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccccf784-0422-43bf-9926-c887daea816f-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"ccccf784-0422-43bf-9926-c887daea816f\") " pod="openstack/manila-api-0"
Oct 07 13:23:12 crc kubenswrapper[5024]: I1007 13:23:12.989869 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ccccf784-0422-43bf-9926-c887daea816f-config-data-custom\") pod \"manila-api-0\" (UID: \"ccccf784-0422-43bf-9926-c887daea816f\") " pod="openstack/manila-api-0"
Oct 07 13:23:13 crc kubenswrapper[5024]: I1007 13:23:13.091423 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ccccf784-0422-43bf-9926-c887daea816f-etc-machine-id\") pod \"manila-api-0\" (UID:
\"ccccf784-0422-43bf-9926-c887daea816f\") " pod="openstack/manila-api-0" Oct 07 13:23:13 crc kubenswrapper[5024]: I1007 13:23:13.091465 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccccf784-0422-43bf-9926-c887daea816f-scripts\") pod \"manila-api-0\" (UID: \"ccccf784-0422-43bf-9926-c887daea816f\") " pod="openstack/manila-api-0" Oct 07 13:23:13 crc kubenswrapper[5024]: I1007 13:23:13.091497 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccccf784-0422-43bf-9926-c887daea816f-internal-tls-certs\") pod \"manila-api-0\" (UID: \"ccccf784-0422-43bf-9926-c887daea816f\") " pod="openstack/manila-api-0" Oct 07 13:23:13 crc kubenswrapper[5024]: I1007 13:23:13.091524 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccccf784-0422-43bf-9926-c887daea816f-public-tls-certs\") pod \"manila-api-0\" (UID: \"ccccf784-0422-43bf-9926-c887daea816f\") " pod="openstack/manila-api-0" Oct 07 13:23:13 crc kubenswrapper[5024]: I1007 13:23:13.091544 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccccf784-0422-43bf-9926-c887daea816f-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"ccccf784-0422-43bf-9926-c887daea816f\") " pod="openstack/manila-api-0" Oct 07 13:23:13 crc kubenswrapper[5024]: I1007 13:23:13.091594 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ccccf784-0422-43bf-9926-c887daea816f-config-data-custom\") pod \"manila-api-0\" (UID: \"ccccf784-0422-43bf-9926-c887daea816f\") " pod="openstack/manila-api-0" Oct 07 13:23:13 crc kubenswrapper[5024]: I1007 13:23:13.091661 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccccf784-0422-43bf-9926-c887daea816f-logs\") pod \"manila-api-0\" (UID: \"ccccf784-0422-43bf-9926-c887daea816f\") " pod="openstack/manila-api-0" Oct 07 13:23:13 crc kubenswrapper[5024]: I1007 13:23:13.091681 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccccf784-0422-43bf-9926-c887daea816f-config-data\") pod \"manila-api-0\" (UID: \"ccccf784-0422-43bf-9926-c887daea816f\") " pod="openstack/manila-api-0" Oct 07 13:23:13 crc kubenswrapper[5024]: I1007 13:23:13.091736 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t89rf\" (UniqueName: \"kubernetes.io/projected/ccccf784-0422-43bf-9926-c887daea816f-kube-api-access-t89rf\") pod \"manila-api-0\" (UID: \"ccccf784-0422-43bf-9926-c887daea816f\") " pod="openstack/manila-api-0" Oct 07 13:23:13 crc kubenswrapper[5024]: I1007 13:23:13.092859 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ccccf784-0422-43bf-9926-c887daea816f-etc-machine-id\") pod \"manila-api-0\" (UID: \"ccccf784-0422-43bf-9926-c887daea816f\") " pod="openstack/manila-api-0" Oct 07 13:23:13 crc kubenswrapper[5024]: I1007 13:23:13.094989 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccccf784-0422-43bf-9926-c887daea816f-logs\") pod \"manila-api-0\" (UID: \"ccccf784-0422-43bf-9926-c887daea816f\") " pod="openstack/manila-api-0" Oct 07 13:23:13 crc kubenswrapper[5024]: I1007 13:23:13.099710 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccccf784-0422-43bf-9926-c887daea816f-internal-tls-certs\") pod \"manila-api-0\" (UID: \"ccccf784-0422-43bf-9926-c887daea816f\") " pod="openstack/manila-api-0" Oct 07 13:23:13 crc kubenswrapper[5024]: I1007 
13:23:13.120247 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccccf784-0422-43bf-9926-c887daea816f-config-data\") pod \"manila-api-0\" (UID: \"ccccf784-0422-43bf-9926-c887daea816f\") " pod="openstack/manila-api-0" Oct 07 13:23:13 crc kubenswrapper[5024]: I1007 13:23:13.120676 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccccf784-0422-43bf-9926-c887daea816f-public-tls-certs\") pod \"manila-api-0\" (UID: \"ccccf784-0422-43bf-9926-c887daea816f\") " pod="openstack/manila-api-0" Oct 07 13:23:13 crc kubenswrapper[5024]: I1007 13:23:13.120818 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ccccf784-0422-43bf-9926-c887daea816f-config-data-custom\") pod \"manila-api-0\" (UID: \"ccccf784-0422-43bf-9926-c887daea816f\") " pod="openstack/manila-api-0" Oct 07 13:23:13 crc kubenswrapper[5024]: I1007 13:23:13.121261 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccccf784-0422-43bf-9926-c887daea816f-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"ccccf784-0422-43bf-9926-c887daea816f\") " pod="openstack/manila-api-0" Oct 07 13:23:13 crc kubenswrapper[5024]: I1007 13:23:13.122250 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccccf784-0422-43bf-9926-c887daea816f-scripts\") pod \"manila-api-0\" (UID: \"ccccf784-0422-43bf-9926-c887daea816f\") " pod="openstack/manila-api-0" Oct 07 13:23:13 crc kubenswrapper[5024]: I1007 13:23:13.133571 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t89rf\" (UniqueName: \"kubernetes.io/projected/ccccf784-0422-43bf-9926-c887daea816f-kube-api-access-t89rf\") pod \"manila-api-0\" (UID: 
\"ccccf784-0422-43bf-9926-c887daea816f\") " pod="openstack/manila-api-0" Oct 07 13:23:13 crc kubenswrapper[5024]: I1007 13:23:13.196117 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Oct 07 13:23:13 crc kubenswrapper[5024]: I1007 13:23:13.785056 5024 generic.go:334] "Generic (PLEG): container finished" podID="aad19176-43cb-4ebc-8605-8d619f32e96b" containerID="1d7d19c0a3eba91dd93ae7770e4ff472fc0407cfb5d5c8a699a36f5d909c92e5" exitCode=0 Oct 07 13:23:13 crc kubenswrapper[5024]: I1007 13:23:13.785407 5024 generic.go:334] "Generic (PLEG): container finished" podID="aad19176-43cb-4ebc-8605-8d619f32e96b" containerID="ad52b5b371f3cebd67a868f43b4a771168905dd6e8a93e2733037d3f3b66f522" exitCode=2 Oct 07 13:23:13 crc kubenswrapper[5024]: I1007 13:23:13.785417 5024 generic.go:334] "Generic (PLEG): container finished" podID="aad19176-43cb-4ebc-8605-8d619f32e96b" containerID="707516c3036a055bf960a5c4c70727477b998fd450cfa93a7c2f93395b9d0435" exitCode=0 Oct 07 13:23:13 crc kubenswrapper[5024]: I1007 13:23:13.785445 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aad19176-43cb-4ebc-8605-8d619f32e96b","Type":"ContainerDied","Data":"1d7d19c0a3eba91dd93ae7770e4ff472fc0407cfb5d5c8a699a36f5d909c92e5"} Oct 07 13:23:13 crc kubenswrapper[5024]: I1007 13:23:13.785479 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aad19176-43cb-4ebc-8605-8d619f32e96b","Type":"ContainerDied","Data":"ad52b5b371f3cebd67a868f43b4a771168905dd6e8a93e2733037d3f3b66f522"} Oct 07 13:23:13 crc kubenswrapper[5024]: I1007 13:23:13.785489 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aad19176-43cb-4ebc-8605-8d619f32e96b","Type":"ContainerDied","Data":"707516c3036a055bf960a5c4c70727477b998fd450cfa93a7c2f93395b9d0435"} Oct 07 13:23:13 crc kubenswrapper[5024]: I1007 13:23:13.941258 5024 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/manila-api-0"] Oct 07 13:23:14 crc kubenswrapper[5024]: I1007 13:23:14.776660 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2632b1f-8d01-40b4-9d68-27677375c783" path="/var/lib/kubelet/pods/a2632b1f-8d01-40b4-9d68-27677375c783/volumes" Oct 07 13:23:14 crc kubenswrapper[5024]: I1007 13:23:14.800348 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"ccccf784-0422-43bf-9926-c887daea816f","Type":"ContainerStarted","Data":"f9c7c0ff9f2fca95269bcb38ead52e00ba45a83844943a3de2be50ad3dea36b5"} Oct 07 13:23:16 crc kubenswrapper[5024]: I1007 13:23:16.822180 5024 generic.go:334] "Generic (PLEG): container finished" podID="aad19176-43cb-4ebc-8605-8d619f32e96b" containerID="e31fe92b5980eae90855f7b387d75664850fbe569754c45ced756cad9e481862" exitCode=0 Oct 07 13:23:16 crc kubenswrapper[5024]: I1007 13:23:16.822426 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aad19176-43cb-4ebc-8605-8d619f32e96b","Type":"ContainerDied","Data":"e31fe92b5980eae90855f7b387d75664850fbe569754c45ced756cad9e481862"} Oct 07 13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.474596 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 07 13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.505443 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-sg-core-conf-yaml\") pod \"aad19176-43cb-4ebc-8605-8d619f32e96b\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " Oct 07 13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.506239 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aad19176-43cb-4ebc-8605-8d619f32e96b-log-httpd\") pod \"aad19176-43cb-4ebc-8605-8d619f32e96b\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " Oct 07 13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.506279 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-config-data\") pod \"aad19176-43cb-4ebc-8605-8d619f32e96b\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " Oct 07 13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.506387 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx8cc\" (UniqueName: \"kubernetes.io/projected/aad19176-43cb-4ebc-8605-8d619f32e96b-kube-api-access-lx8cc\") pod \"aad19176-43cb-4ebc-8605-8d619f32e96b\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " Oct 07 13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.506457 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-scripts\") pod \"aad19176-43cb-4ebc-8605-8d619f32e96b\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " Oct 07 13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.506540 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-ceilometer-tls-certs\") pod \"aad19176-43cb-4ebc-8605-8d619f32e96b\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " Oct 07 13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.506877 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aad19176-43cb-4ebc-8605-8d619f32e96b-run-httpd\") pod \"aad19176-43cb-4ebc-8605-8d619f32e96b\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " Oct 07 13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.506921 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-combined-ca-bundle\") pod \"aad19176-43cb-4ebc-8605-8d619f32e96b\" (UID: \"aad19176-43cb-4ebc-8605-8d619f32e96b\") " Oct 07 13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.512036 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aad19176-43cb-4ebc-8605-8d619f32e96b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "aad19176-43cb-4ebc-8605-8d619f32e96b" (UID: "aad19176-43cb-4ebc-8605-8d619f32e96b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.516221 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-scripts" (OuterVolumeSpecName: "scripts") pod "aad19176-43cb-4ebc-8605-8d619f32e96b" (UID: "aad19176-43cb-4ebc-8605-8d619f32e96b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.520699 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aad19176-43cb-4ebc-8605-8d619f32e96b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "aad19176-43cb-4ebc-8605-8d619f32e96b" (UID: "aad19176-43cb-4ebc-8605-8d619f32e96b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.531679 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aad19176-43cb-4ebc-8605-8d619f32e96b-kube-api-access-lx8cc" (OuterVolumeSpecName: "kube-api-access-lx8cc") pod "aad19176-43cb-4ebc-8605-8d619f32e96b" (UID: "aad19176-43cb-4ebc-8605-8d619f32e96b"). InnerVolumeSpecName "kube-api-access-lx8cc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.609848 5024 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aad19176-43cb-4ebc-8605-8d619f32e96b-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.609892 5024 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aad19176-43cb-4ebc-8605-8d619f32e96b-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.609919 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx8cc\" (UniqueName: \"kubernetes.io/projected/aad19176-43cb-4ebc-8605-8d619f32e96b-kube-api-access-lx8cc\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.609931 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-scripts\") on node \"crc\" DevicePath \"\"" Oct 07 
13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.642296 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Oct 07 13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.736013 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76b5fdb995-d2sxl" Oct 07 13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.803474 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "aad19176-43cb-4ebc-8605-8d619f32e96b" (UID: "aad19176-43cb-4ebc-8605-8d619f32e96b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.814956 5024 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.852437 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aad19176-43cb-4ebc-8605-8d619f32e96b","Type":"ContainerDied","Data":"9cca98d575d7067633a859b94857083830cb49208aeb031c6047776f89f24b95"} Oct 07 13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.852515 5024 scope.go:117] "RemoveContainer" containerID="1d7d19c0a3eba91dd93ae7770e4ff472fc0407cfb5d5c8a699a36f5d909c92e5" Oct 07 13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.852763 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 07 13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.865922 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-nd8ds"] Oct 07 13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.867092 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" podUID="e59a7592-dced-40e5-abb1-d85862ca5ac7" containerName="dnsmasq-dns" containerID="cri-o://3fa5c2adddb0e72f79b745a61b3c66d71bcbfcbef1f0ae2a1e65ed9310b4b749" gracePeriod=10 Oct 07 13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.902508 5024 scope.go:117] "RemoveContainer" containerID="ad52b5b371f3cebd67a868f43b4a771168905dd6e8a93e2733037d3f3b66f522" Oct 07 13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.979861 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "aad19176-43cb-4ebc-8605-8d619f32e96b" (UID: "aad19176-43cb-4ebc-8605-8d619f32e96b"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:23:17 crc kubenswrapper[5024]: I1007 13:23:17.987439 5024 scope.go:117] "RemoveContainer" containerID="e31fe92b5980eae90855f7b387d75664850fbe569754c45ced756cad9e481862" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.001319 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aad19176-43cb-4ebc-8605-8d619f32e96b" (UID: "aad19176-43cb-4ebc-8605-8d619f32e96b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.012096 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-config-data" (OuterVolumeSpecName: "config-data") pod "aad19176-43cb-4ebc-8605-8d619f32e96b" (UID: "aad19176-43cb-4ebc-8605-8d619f32e96b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.018744 5024 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.018782 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.018792 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aad19176-43cb-4ebc-8605-8d619f32e96b-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.028921 5024 scope.go:117] "RemoveContainer" containerID="707516c3036a055bf960a5c4c70727477b998fd450cfa93a7c2f93395b9d0435" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.373024 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.383325 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.414589 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 07 13:23:18 crc kubenswrapper[5024]: E1007 13:23:18.415409 5024 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="aad19176-43cb-4ebc-8605-8d619f32e96b" containerName="ceilometer-central-agent" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.415431 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad19176-43cb-4ebc-8605-8d619f32e96b" containerName="ceilometer-central-agent" Oct 07 13:23:18 crc kubenswrapper[5024]: E1007 13:23:18.415484 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad19176-43cb-4ebc-8605-8d619f32e96b" containerName="ceilometer-notification-agent" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.415493 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad19176-43cb-4ebc-8605-8d619f32e96b" containerName="ceilometer-notification-agent" Oct 07 13:23:18 crc kubenswrapper[5024]: E1007 13:23:18.415513 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad19176-43cb-4ebc-8605-8d619f32e96b" containerName="proxy-httpd" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.415521 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad19176-43cb-4ebc-8605-8d619f32e96b" containerName="proxy-httpd" Oct 07 13:23:18 crc kubenswrapper[5024]: E1007 13:23:18.415539 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad19176-43cb-4ebc-8605-8d619f32e96b" containerName="sg-core" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.415549 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad19176-43cb-4ebc-8605-8d619f32e96b" containerName="sg-core" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.415794 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad19176-43cb-4ebc-8605-8d619f32e96b" containerName="sg-core" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.415810 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad19176-43cb-4ebc-8605-8d619f32e96b" containerName="proxy-httpd" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.415842 5024 
memory_manager.go:354] "RemoveStaleState removing state" podUID="aad19176-43cb-4ebc-8605-8d619f32e96b" containerName="ceilometer-notification-agent" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.415852 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad19176-43cb-4ebc-8605-8d619f32e96b" containerName="ceilometer-central-agent" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.419216 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.422189 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.422378 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.422572 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.442886 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.444515 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.535387 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " pod="openstack/ceilometer-0" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.535457 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg7zn\" (UniqueName: \"kubernetes.io/projected/620b030a-9baf-4c09-853d-c2e14b09df7b-kube-api-access-rg7zn\") pod \"ceilometer-0\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " pod="openstack/ceilometer-0" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.535537 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-scripts\") pod \"ceilometer-0\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " pod="openstack/ceilometer-0" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.535599 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/620b030a-9baf-4c09-853d-c2e14b09df7b-run-httpd\") pod \"ceilometer-0\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " pod="openstack/ceilometer-0" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.535625 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " pod="openstack/ceilometer-0" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 
13:23:18.535693 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " pod="openstack/ceilometer-0" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.535736 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-config-data\") pod \"ceilometer-0\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " pod="openstack/ceilometer-0" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.535843 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/620b030a-9baf-4c09-853d-c2e14b09df7b-log-httpd\") pod \"ceilometer-0\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " pod="openstack/ceilometer-0" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.638789 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-config\") pod \"e59a7592-dced-40e5-abb1-d85862ca5ac7\" (UID: \"e59a7592-dced-40e5-abb1-d85862ca5ac7\") " Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.638877 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-openstack-edpm-ipam\") pod \"e59a7592-dced-40e5-abb1-d85862ca5ac7\" (UID: \"e59a7592-dced-40e5-abb1-d85862ca5ac7\") " Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.638914 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-ovsdbserver-nb\") pod \"e59a7592-dced-40e5-abb1-d85862ca5ac7\" (UID: \"e59a7592-dced-40e5-abb1-d85862ca5ac7\") " Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.639027 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-dns-svc\") pod \"e59a7592-dced-40e5-abb1-d85862ca5ac7\" (UID: \"e59a7592-dced-40e5-abb1-d85862ca5ac7\") " Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.639167 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-ovsdbserver-sb\") pod \"e59a7592-dced-40e5-abb1-d85862ca5ac7\" (UID: \"e59a7592-dced-40e5-abb1-d85862ca5ac7\") " Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.639204 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52qxw\" (UniqueName: \"kubernetes.io/projected/e59a7592-dced-40e5-abb1-d85862ca5ac7-kube-api-access-52qxw\") pod \"e59a7592-dced-40e5-abb1-d85862ca5ac7\" (UID: \"e59a7592-dced-40e5-abb1-d85862ca5ac7\") " Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.639408 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-scripts\") pod \"ceilometer-0\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " pod="openstack/ceilometer-0" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.639466 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/620b030a-9baf-4c09-853d-c2e14b09df7b-run-httpd\") pod \"ceilometer-0\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " pod="openstack/ceilometer-0" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.639488 5024 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " pod="openstack/ceilometer-0" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.639542 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " pod="openstack/ceilometer-0" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.639576 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-config-data\") pod \"ceilometer-0\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " pod="openstack/ceilometer-0" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.639651 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/620b030a-9baf-4c09-853d-c2e14b09df7b-log-httpd\") pod \"ceilometer-0\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " pod="openstack/ceilometer-0" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.639704 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " pod="openstack/ceilometer-0" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.639728 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg7zn\" (UniqueName: \"kubernetes.io/projected/620b030a-9baf-4c09-853d-c2e14b09df7b-kube-api-access-rg7zn\") pod 
\"ceilometer-0\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " pod="openstack/ceilometer-0" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.648605 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/620b030a-9baf-4c09-853d-c2e14b09df7b-log-httpd\") pod \"ceilometer-0\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " pod="openstack/ceilometer-0" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.648649 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/620b030a-9baf-4c09-853d-c2e14b09df7b-run-httpd\") pod \"ceilometer-0\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " pod="openstack/ceilometer-0" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.656392 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " pod="openstack/ceilometer-0" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.656848 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " pod="openstack/ceilometer-0" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.658327 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-scripts\") pod \"ceilometer-0\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " pod="openstack/ceilometer-0" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.659838 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " pod="openstack/ceilometer-0" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.662345 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-config-data\") pod \"ceilometer-0\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " pod="openstack/ceilometer-0" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.667454 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg7zn\" (UniqueName: \"kubernetes.io/projected/620b030a-9baf-4c09-853d-c2e14b09df7b-kube-api-access-rg7zn\") pod \"ceilometer-0\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " pod="openstack/ceilometer-0" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.673071 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e59a7592-dced-40e5-abb1-d85862ca5ac7-kube-api-access-52qxw" (OuterVolumeSpecName: "kube-api-access-52qxw") pod "e59a7592-dced-40e5-abb1-d85862ca5ac7" (UID: "e59a7592-dced-40e5-abb1-d85862ca5ac7"). InnerVolumeSpecName "kube-api-access-52qxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.723549 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e59a7592-dced-40e5-abb1-d85862ca5ac7" (UID: "e59a7592-dced-40e5-abb1-d85862ca5ac7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.725649 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e59a7592-dced-40e5-abb1-d85862ca5ac7" (UID: "e59a7592-dced-40e5-abb1-d85862ca5ac7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.727582 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-config" (OuterVolumeSpecName: "config") pod "e59a7592-dced-40e5-abb1-d85862ca5ac7" (UID: "e59a7592-dced-40e5-abb1-d85862ca5ac7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.735943 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e59a7592-dced-40e5-abb1-d85862ca5ac7" (UID: "e59a7592-dced-40e5-abb1-d85862ca5ac7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.742341 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-config\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.742379 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.742393 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.742401 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.742411 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52qxw\" (UniqueName: \"kubernetes.io/projected/e59a7592-dced-40e5-abb1-d85862ca5ac7-kube-api-access-52qxw\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.756542 5024 scope.go:117] "RemoveContainer" containerID="7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd" Oct 07 13:23:18 crc kubenswrapper[5024]: E1007 13:23:18.756939 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.766421 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aad19176-43cb-4ebc-8605-8d619f32e96b" path="/var/lib/kubelet/pods/aad19176-43cb-4ebc-8605-8d619f32e96b/volumes" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.769857 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.770872 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "e59a7592-dced-40e5-abb1-d85862ca5ac7" (UID: "e59a7592-dced-40e5-abb1-d85862ca5ac7"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.845498 5024 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e59a7592-dced-40e5-abb1-d85862ca5ac7-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.872031 5024 generic.go:334] "Generic (PLEG): container finished" podID="e59a7592-dced-40e5-abb1-d85862ca5ac7" containerID="3fa5c2adddb0e72f79b745a61b3c66d71bcbfcbef1f0ae2a1e65ed9310b4b749" exitCode=0 Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.872102 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" event={"ID":"e59a7592-dced-40e5-abb1-d85862ca5ac7","Type":"ContainerDied","Data":"3fa5c2adddb0e72f79b745a61b3c66d71bcbfcbef1f0ae2a1e65ed9310b4b749"} Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.872150 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" 
event={"ID":"e59a7592-dced-40e5-abb1-d85862ca5ac7","Type":"ContainerDied","Data":"79f1d80a07e950eecb3122693e459acb7d8c468b4665521fbca059d0d0f78e41"} Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.872169 5024 scope.go:117] "RemoveContainer" containerID="3fa5c2adddb0e72f79b745a61b3c66d71bcbfcbef1f0ae2a1e65ed9310b4b749" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.872325 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-nd8ds" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.894171 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"ccccf784-0422-43bf-9926-c887daea816f","Type":"ContainerStarted","Data":"142a1aa7f2c0820d7953016d227605c7464fd1c59a8cfa2911bc02b4d8345865"} Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.894218 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"ccccf784-0422-43bf-9926-c887daea816f","Type":"ContainerStarted","Data":"cad16de7f0dc3374eccdc31eea5b873dd7e16f08a9242854ae99c0640967b956"} Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.894540 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.918880 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1","Type":"ContainerStarted","Data":"1838512b44ffc0120818f8c53dc2d81d9fc5495683da0ba277c3d46fded108ec"} Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.918915 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1","Type":"ContainerStarted","Data":"3de8bd5bc2840619f3284a9a95a825c05f5684ed095048beb8b4ab0461607019"} Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.920375 5024 scope.go:117] "RemoveContainer" 
containerID="2cc5b8aef1af9f77ea971cc1771ad6246abd4a448ef466768b2a51c77ec59c2b" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.922475 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-nd8ds"] Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.930594 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-nd8ds"] Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.964636 5024 scope.go:117] "RemoveContainer" containerID="3fa5c2adddb0e72f79b745a61b3c66d71bcbfcbef1f0ae2a1e65ed9310b4b749" Oct 07 13:23:18 crc kubenswrapper[5024]: E1007 13:23:18.965104 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fa5c2adddb0e72f79b745a61b3c66d71bcbfcbef1f0ae2a1e65ed9310b4b749\": container with ID starting with 3fa5c2adddb0e72f79b745a61b3c66d71bcbfcbef1f0ae2a1e65ed9310b4b749 not found: ID does not exist" containerID="3fa5c2adddb0e72f79b745a61b3c66d71bcbfcbef1f0ae2a1e65ed9310b4b749" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.965149 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fa5c2adddb0e72f79b745a61b3c66d71bcbfcbef1f0ae2a1e65ed9310b4b749"} err="failed to get container status \"3fa5c2adddb0e72f79b745a61b3c66d71bcbfcbef1f0ae2a1e65ed9310b4b749\": rpc error: code = NotFound desc = could not find container \"3fa5c2adddb0e72f79b745a61b3c66d71bcbfcbef1f0ae2a1e65ed9310b4b749\": container with ID starting with 3fa5c2adddb0e72f79b745a61b3c66d71bcbfcbef1f0ae2a1e65ed9310b4b749 not found: ID does not exist" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.965171 5024 scope.go:117] "RemoveContainer" containerID="2cc5b8aef1af9f77ea971cc1771ad6246abd4a448ef466768b2a51c77ec59c2b" Oct 07 13:23:18 crc kubenswrapper[5024]: E1007 13:23:18.965754 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"2cc5b8aef1af9f77ea971cc1771ad6246abd4a448ef466768b2a51c77ec59c2b\": container with ID starting with 2cc5b8aef1af9f77ea971cc1771ad6246abd4a448ef466768b2a51c77ec59c2b not found: ID does not exist" containerID="2cc5b8aef1af9f77ea971cc1771ad6246abd4a448ef466768b2a51c77ec59c2b" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.965780 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cc5b8aef1af9f77ea971cc1771ad6246abd4a448ef466768b2a51c77ec59c2b"} err="failed to get container status \"2cc5b8aef1af9f77ea971cc1771ad6246abd4a448ef466768b2a51c77ec59c2b\": rpc error: code = NotFound desc = could not find container \"2cc5b8aef1af9f77ea971cc1771ad6246abd4a448ef466768b2a51c77ec59c2b\": container with ID starting with 2cc5b8aef1af9f77ea971cc1771ad6246abd4a448ef466768b2a51c77ec59c2b not found: ID does not exist" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.965849 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=4.24308485 podStartE2EDuration="12.965829694s" podCreationTimestamp="2025-10-07 13:23:06 +0000 UTC" firstStartedPulling="2025-10-07 13:23:08.408598551 +0000 UTC m=+3326.484385389" lastFinishedPulling="2025-10-07 13:23:17.131343385 +0000 UTC m=+3335.207130233" observedRunningTime="2025-10-07 13:23:18.956988238 +0000 UTC m=+3337.032775076" watchObservedRunningTime="2025-10-07 13:23:18.965829694 +0000 UTC m=+3337.041616532" Oct 07 13:23:18 crc kubenswrapper[5024]: I1007 13:23:18.967110 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=6.967103471 podStartE2EDuration="6.967103471s" podCreationTimestamp="2025-10-07 13:23:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 13:23:18.937531945 +0000 UTC m=+3337.013318783" 
watchObservedRunningTime="2025-10-07 13:23:18.967103471 +0000 UTC m=+3337.042890299" Oct 07 13:23:19 crc kubenswrapper[5024]: I1007 13:23:19.113584 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 07 13:23:19 crc kubenswrapper[5024]: I1007 13:23:19.930188 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"620b030a-9baf-4c09-853d-c2e14b09df7b","Type":"ContainerStarted","Data":"6e4fbe22984dadd07783a3a88bea76407a5fd257700f3b6fc9682ac832682668"} Oct 07 13:23:19 crc kubenswrapper[5024]: I1007 13:23:19.930507 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"620b030a-9baf-4c09-853d-c2e14b09df7b","Type":"ContainerStarted","Data":"f992f55365785a67ea3f00752978ab92820810c761b8c72fda440c19cf0a935e"} Oct 07 13:23:20 crc kubenswrapper[5024]: I1007 13:23:20.608366 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6db8948-lncr7" podUID="2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused" Oct 07 13:23:20 crc kubenswrapper[5024]: I1007 13:23:20.608885 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6db8948-lncr7" Oct 07 13:23:20 crc kubenswrapper[5024]: I1007 13:23:20.722634 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 07 13:23:20 crc kubenswrapper[5024]: I1007 13:23:20.763852 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e59a7592-dced-40e5-abb1-d85862ca5ac7" path="/var/lib/kubelet/pods/e59a7592-dced-40e5-abb1-d85862ca5ac7/volumes" Oct 07 13:23:20 crc kubenswrapper[5024]: I1007 13:23:20.943535 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"620b030a-9baf-4c09-853d-c2e14b09df7b","Type":"ContainerStarted","Data":"82d097732945244eaa066ed06114f45db194efe5ce740fae41a4edba867ce054"} Oct 07 13:23:21 crc kubenswrapper[5024]: I1007 13:23:21.997086 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"620b030a-9baf-4c09-853d-c2e14b09df7b","Type":"ContainerStarted","Data":"7df9ac47880da67c12b737138234078d27e3e28948b3f379df3d98df6a8308b4"} Oct 07 13:23:24 crc kubenswrapper[5024]: I1007 13:23:24.026521 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"620b030a-9baf-4c09-853d-c2e14b09df7b","Type":"ContainerStarted","Data":"26635070795e5178fbd81f5cf7f73fddfdb331bccff4f7176c1ff8fe6e9e96ea"} Oct 07 13:23:24 crc kubenswrapper[5024]: I1007 13:23:24.027112 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 07 13:23:24 crc kubenswrapper[5024]: I1007 13:23:24.026812 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="620b030a-9baf-4c09-853d-c2e14b09df7b" containerName="proxy-httpd" containerID="cri-o://26635070795e5178fbd81f5cf7f73fddfdb331bccff4f7176c1ff8fe6e9e96ea" gracePeriod=30 Oct 07 13:23:24 crc kubenswrapper[5024]: I1007 13:23:24.026848 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="620b030a-9baf-4c09-853d-c2e14b09df7b" containerName="ceilometer-notification-agent" containerID="cri-o://82d097732945244eaa066ed06114f45db194efe5ce740fae41a4edba867ce054" gracePeriod=30 Oct 07 13:23:24 crc kubenswrapper[5024]: I1007 13:23:24.026895 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="620b030a-9baf-4c09-853d-c2e14b09df7b" containerName="sg-core" containerID="cri-o://7df9ac47880da67c12b737138234078d27e3e28948b3f379df3d98df6a8308b4" gracePeriod=30 Oct 07 13:23:24 crc kubenswrapper[5024]: 
I1007 13:23:24.026695 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="620b030a-9baf-4c09-853d-c2e14b09df7b" containerName="ceilometer-central-agent" containerID="cri-o://6e4fbe22984dadd07783a3a88bea76407a5fd257700f3b6fc9682ac832682668" gracePeriod=30 Oct 07 13:23:24 crc kubenswrapper[5024]: I1007 13:23:24.056965 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.07039731 podStartE2EDuration="6.056940492s" podCreationTimestamp="2025-10-07 13:23:18 +0000 UTC" firstStartedPulling="2025-10-07 13:23:19.166344973 +0000 UTC m=+3337.242131811" lastFinishedPulling="2025-10-07 13:23:23.152888155 +0000 UTC m=+3341.228674993" observedRunningTime="2025-10-07 13:23:24.054175462 +0000 UTC m=+3342.129962310" watchObservedRunningTime="2025-10-07 13:23:24.056940492 +0000 UTC m=+3342.132727330" Oct 07 13:23:24 crc kubenswrapper[5024]: I1007 13:23:24.848367 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 07 13:23:24 crc kubenswrapper[5024]: I1007 13:23:24.939200 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6db8948-lncr7" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.012436 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-scripts\") pod \"620b030a-9baf-4c09-853d-c2e14b09df7b\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.012499 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-ceilometer-tls-certs\") pod \"620b030a-9baf-4c09-853d-c2e14b09df7b\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.012622 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-config-data\") pod \"620b030a-9baf-4c09-853d-c2e14b09df7b\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.012711 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg7zn\" (UniqueName: \"kubernetes.io/projected/620b030a-9baf-4c09-853d-c2e14b09df7b-kube-api-access-rg7zn\") pod \"620b030a-9baf-4c09-853d-c2e14b09df7b\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.012781 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-combined-ca-bundle\") pod \"620b030a-9baf-4c09-853d-c2e14b09df7b\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.012817 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/620b030a-9baf-4c09-853d-c2e14b09df7b-log-httpd\") pod \"620b030a-9baf-4c09-853d-c2e14b09df7b\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.012835 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-sg-core-conf-yaml\") pod \"620b030a-9baf-4c09-853d-c2e14b09df7b\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.012865 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/620b030a-9baf-4c09-853d-c2e14b09df7b-run-httpd\") pod \"620b030a-9baf-4c09-853d-c2e14b09df7b\" (UID: \"620b030a-9baf-4c09-853d-c2e14b09df7b\") " Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.013403 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/620b030a-9baf-4c09-853d-c2e14b09df7b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "620b030a-9baf-4c09-853d-c2e14b09df7b" (UID: "620b030a-9baf-4c09-853d-c2e14b09df7b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.013585 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/620b030a-9baf-4c09-853d-c2e14b09df7b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "620b030a-9baf-4c09-853d-c2e14b09df7b" (UID: "620b030a-9baf-4c09-853d-c2e14b09df7b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.018131 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/620b030a-9baf-4c09-853d-c2e14b09df7b-kube-api-access-rg7zn" (OuterVolumeSpecName: "kube-api-access-rg7zn") pod "620b030a-9baf-4c09-853d-c2e14b09df7b" (UID: "620b030a-9baf-4c09-853d-c2e14b09df7b"). InnerVolumeSpecName "kube-api-access-rg7zn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.018303 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-scripts" (OuterVolumeSpecName: "scripts") pod "620b030a-9baf-4c09-853d-c2e14b09df7b" (UID: "620b030a-9baf-4c09-853d-c2e14b09df7b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.037616 5024 generic.go:334] "Generic (PLEG): container finished" podID="620b030a-9baf-4c09-853d-c2e14b09df7b" containerID="26635070795e5178fbd81f5cf7f73fddfdb331bccff4f7176c1ff8fe6e9e96ea" exitCode=0 Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.037660 5024 generic.go:334] "Generic (PLEG): container finished" podID="620b030a-9baf-4c09-853d-c2e14b09df7b" containerID="7df9ac47880da67c12b737138234078d27e3e28948b3f379df3d98df6a8308b4" exitCode=2 Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.037673 5024 generic.go:334] "Generic (PLEG): container finished" podID="620b030a-9baf-4c09-853d-c2e14b09df7b" containerID="82d097732945244eaa066ed06114f45db194efe5ce740fae41a4edba867ce054" exitCode=0 Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.037685 5024 generic.go:334] "Generic (PLEG): container finished" podID="620b030a-9baf-4c09-853d-c2e14b09df7b" containerID="6e4fbe22984dadd07783a3a88bea76407a5fd257700f3b6fc9682ac832682668" exitCode=0 Oct 07 13:23:25 crc kubenswrapper[5024]: 
I1007 13:23:25.037786 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"620b030a-9baf-4c09-853d-c2e14b09df7b","Type":"ContainerDied","Data":"26635070795e5178fbd81f5cf7f73fddfdb331bccff4f7176c1ff8fe6e9e96ea"} Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.037832 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"620b030a-9baf-4c09-853d-c2e14b09df7b","Type":"ContainerDied","Data":"7df9ac47880da67c12b737138234078d27e3e28948b3f379df3d98df6a8308b4"} Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.037851 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"620b030a-9baf-4c09-853d-c2e14b09df7b","Type":"ContainerDied","Data":"82d097732945244eaa066ed06114f45db194efe5ce740fae41a4edba867ce054"} Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.037863 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"620b030a-9baf-4c09-853d-c2e14b09df7b","Type":"ContainerDied","Data":"6e4fbe22984dadd07783a3a88bea76407a5fd257700f3b6fc9682ac832682668"} Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.037875 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"620b030a-9baf-4c09-853d-c2e14b09df7b","Type":"ContainerDied","Data":"f992f55365785a67ea3f00752978ab92820810c761b8c72fda440c19cf0a935e"} Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.037895 5024 scope.go:117] "RemoveContainer" containerID="26635070795e5178fbd81f5cf7f73fddfdb331bccff4f7176c1ff8fe6e9e96ea" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.038078 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.042726 5024 generic.go:334] "Generic (PLEG): container finished" podID="2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782" containerID="0ddab389f822d2b2f3b0f06d167e3a6f77e974ea8f14527e4507530de0b80b23" exitCode=137 Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.042862 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "620b030a-9baf-4c09-853d-c2e14b09df7b" (UID: "620b030a-9baf-4c09-853d-c2e14b09df7b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.042848 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6db8948-lncr7" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.042871 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6db8948-lncr7" event={"ID":"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782","Type":"ContainerDied","Data":"0ddab389f822d2b2f3b0f06d167e3a6f77e974ea8f14527e4507530de0b80b23"} Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.043297 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6db8948-lncr7" event={"ID":"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782","Type":"ContainerDied","Data":"969b250f1f79f3be31eb7e30884b1ed390cea55a1d4883b37b2cd455003bd115"} Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.066131 5024 scope.go:117] "RemoveContainer" containerID="7df9ac47880da67c12b737138234078d27e3e28948b3f379df3d98df6a8308b4" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.074504 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod 
"620b030a-9baf-4c09-853d-c2e14b09df7b" (UID: "620b030a-9baf-4c09-853d-c2e14b09df7b"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.092420 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "620b030a-9baf-4c09-853d-c2e14b09df7b" (UID: "620b030a-9baf-4c09-853d-c2e14b09df7b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.092520 5024 scope.go:117] "RemoveContainer" containerID="82d097732945244eaa066ed06114f45db194efe5ce740fae41a4edba867ce054" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.114900 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5lmx\" (UniqueName: \"kubernetes.io/projected/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-kube-api-access-d5lmx\") pod \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\" (UID: \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\") " Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.115044 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-horizon-tls-certs\") pod \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\" (UID: \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\") " Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.115130 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-logs\") pod \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\" (UID: \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\") " Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.115194 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-horizon-secret-key\") pod \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\" (UID: \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\") " Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.115238 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-config-data\") pod \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\" (UID: \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\") " Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.115312 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-scripts\") pod \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\" (UID: \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\") " Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.115450 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-combined-ca-bundle\") pod \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\" (UID: \"2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782\") " Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.115871 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.115888 5024 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/620b030a-9baf-4c09-853d-c2e14b09df7b-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.115898 5024 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.115907 5024 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/620b030a-9baf-4c09-853d-c2e14b09df7b-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.115917 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-scripts\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.115925 5024 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.115933 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg7zn\" (UniqueName: \"kubernetes.io/projected/620b030a-9baf-4c09-853d-c2e14b09df7b-kube-api-access-rg7zn\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.116512 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-logs" (OuterVolumeSpecName: "logs") pod "2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782" (UID: "2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.117978 5024 scope.go:117] "RemoveContainer" containerID="6e4fbe22984dadd07783a3a88bea76407a5fd257700f3b6fc9682ac832682668" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.119152 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782" (UID: "2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.119795 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-kube-api-access-d5lmx" (OuterVolumeSpecName: "kube-api-access-d5lmx") pod "2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782" (UID: "2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782"). InnerVolumeSpecName "kube-api-access-d5lmx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.137532 5024 scope.go:117] "RemoveContainer" containerID="26635070795e5178fbd81f5cf7f73fddfdb331bccff4f7176c1ff8fe6e9e96ea" Oct 07 13:23:25 crc kubenswrapper[5024]: E1007 13:23:25.138016 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26635070795e5178fbd81f5cf7f73fddfdb331bccff4f7176c1ff8fe6e9e96ea\": container with ID starting with 26635070795e5178fbd81f5cf7f73fddfdb331bccff4f7176c1ff8fe6e9e96ea not found: ID does not exist" containerID="26635070795e5178fbd81f5cf7f73fddfdb331bccff4f7176c1ff8fe6e9e96ea" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.138113 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26635070795e5178fbd81f5cf7f73fddfdb331bccff4f7176c1ff8fe6e9e96ea"} err="failed to get container status \"26635070795e5178fbd81f5cf7f73fddfdb331bccff4f7176c1ff8fe6e9e96ea\": rpc error: code = NotFound desc = could not find container \"26635070795e5178fbd81f5cf7f73fddfdb331bccff4f7176c1ff8fe6e9e96ea\": container with ID starting with 26635070795e5178fbd81f5cf7f73fddfdb331bccff4f7176c1ff8fe6e9e96ea not found: ID does not exist" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.138218 5024 scope.go:117] "RemoveContainer" containerID="7df9ac47880da67c12b737138234078d27e3e28948b3f379df3d98df6a8308b4" Oct 07 13:23:25 crc kubenswrapper[5024]: E1007 13:23:25.138595 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7df9ac47880da67c12b737138234078d27e3e28948b3f379df3d98df6a8308b4\": container with ID starting with 7df9ac47880da67c12b737138234078d27e3e28948b3f379df3d98df6a8308b4 not found: ID does not exist" containerID="7df9ac47880da67c12b737138234078d27e3e28948b3f379df3d98df6a8308b4" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.138615 
5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7df9ac47880da67c12b737138234078d27e3e28948b3f379df3d98df6a8308b4"} err="failed to get container status \"7df9ac47880da67c12b737138234078d27e3e28948b3f379df3d98df6a8308b4\": rpc error: code = NotFound desc = could not find container \"7df9ac47880da67c12b737138234078d27e3e28948b3f379df3d98df6a8308b4\": container with ID starting with 7df9ac47880da67c12b737138234078d27e3e28948b3f379df3d98df6a8308b4 not found: ID does not exist" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.138628 5024 scope.go:117] "RemoveContainer" containerID="82d097732945244eaa066ed06114f45db194efe5ce740fae41a4edba867ce054" Oct 07 13:23:25 crc kubenswrapper[5024]: E1007 13:23:25.139196 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82d097732945244eaa066ed06114f45db194efe5ce740fae41a4edba867ce054\": container with ID starting with 82d097732945244eaa066ed06114f45db194efe5ce740fae41a4edba867ce054 not found: ID does not exist" containerID="82d097732945244eaa066ed06114f45db194efe5ce740fae41a4edba867ce054" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.139279 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82d097732945244eaa066ed06114f45db194efe5ce740fae41a4edba867ce054"} err="failed to get container status \"82d097732945244eaa066ed06114f45db194efe5ce740fae41a4edba867ce054\": rpc error: code = NotFound desc = could not find container \"82d097732945244eaa066ed06114f45db194efe5ce740fae41a4edba867ce054\": container with ID starting with 82d097732945244eaa066ed06114f45db194efe5ce740fae41a4edba867ce054 not found: ID does not exist" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.139354 5024 scope.go:117] "RemoveContainer" containerID="6e4fbe22984dadd07783a3a88bea76407a5fd257700f3b6fc9682ac832682668" Oct 07 13:23:25 crc kubenswrapper[5024]: E1007 
13:23:25.139997 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e4fbe22984dadd07783a3a88bea76407a5fd257700f3b6fc9682ac832682668\": container with ID starting with 6e4fbe22984dadd07783a3a88bea76407a5fd257700f3b6fc9682ac832682668 not found: ID does not exist" containerID="6e4fbe22984dadd07783a3a88bea76407a5fd257700f3b6fc9682ac832682668" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.140057 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e4fbe22984dadd07783a3a88bea76407a5fd257700f3b6fc9682ac832682668"} err="failed to get container status \"6e4fbe22984dadd07783a3a88bea76407a5fd257700f3b6fc9682ac832682668\": rpc error: code = NotFound desc = could not find container \"6e4fbe22984dadd07783a3a88bea76407a5fd257700f3b6fc9682ac832682668\": container with ID starting with 6e4fbe22984dadd07783a3a88bea76407a5fd257700f3b6fc9682ac832682668 not found: ID does not exist" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.140095 5024 scope.go:117] "RemoveContainer" containerID="26635070795e5178fbd81f5cf7f73fddfdb331bccff4f7176c1ff8fe6e9e96ea" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.140346 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-config-data" (OuterVolumeSpecName: "config-data") pod "620b030a-9baf-4c09-853d-c2e14b09df7b" (UID: "620b030a-9baf-4c09-853d-c2e14b09df7b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.140444 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26635070795e5178fbd81f5cf7f73fddfdb331bccff4f7176c1ff8fe6e9e96ea"} err="failed to get container status \"26635070795e5178fbd81f5cf7f73fddfdb331bccff4f7176c1ff8fe6e9e96ea\": rpc error: code = NotFound desc = could not find container \"26635070795e5178fbd81f5cf7f73fddfdb331bccff4f7176c1ff8fe6e9e96ea\": container with ID starting with 26635070795e5178fbd81f5cf7f73fddfdb331bccff4f7176c1ff8fe6e9e96ea not found: ID does not exist" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.140469 5024 scope.go:117] "RemoveContainer" containerID="7df9ac47880da67c12b737138234078d27e3e28948b3f379df3d98df6a8308b4" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.140780 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-scripts" (OuterVolumeSpecName: "scripts") pod "2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782" (UID: "2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.140800 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7df9ac47880da67c12b737138234078d27e3e28948b3f379df3d98df6a8308b4"} err="failed to get container status \"7df9ac47880da67c12b737138234078d27e3e28948b3f379df3d98df6a8308b4\": rpc error: code = NotFound desc = could not find container \"7df9ac47880da67c12b737138234078d27e3e28948b3f379df3d98df6a8308b4\": container with ID starting with 7df9ac47880da67c12b737138234078d27e3e28948b3f379df3d98df6a8308b4 not found: ID does not exist" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.140845 5024 scope.go:117] "RemoveContainer" containerID="82d097732945244eaa066ed06114f45db194efe5ce740fae41a4edba867ce054" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.141260 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-config-data" (OuterVolumeSpecName: "config-data") pod "2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782" (UID: "2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.141562 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82d097732945244eaa066ed06114f45db194efe5ce740fae41a4edba867ce054"} err="failed to get container status \"82d097732945244eaa066ed06114f45db194efe5ce740fae41a4edba867ce054\": rpc error: code = NotFound desc = could not find container \"82d097732945244eaa066ed06114f45db194efe5ce740fae41a4edba867ce054\": container with ID starting with 82d097732945244eaa066ed06114f45db194efe5ce740fae41a4edba867ce054 not found: ID does not exist" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.141590 5024 scope.go:117] "RemoveContainer" containerID="6e4fbe22984dadd07783a3a88bea76407a5fd257700f3b6fc9682ac832682668" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.141908 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e4fbe22984dadd07783a3a88bea76407a5fd257700f3b6fc9682ac832682668"} err="failed to get container status \"6e4fbe22984dadd07783a3a88bea76407a5fd257700f3b6fc9682ac832682668\": rpc error: code = NotFound desc = could not find container \"6e4fbe22984dadd07783a3a88bea76407a5fd257700f3b6fc9682ac832682668\": container with ID starting with 6e4fbe22984dadd07783a3a88bea76407a5fd257700f3b6fc9682ac832682668 not found: ID does not exist" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.141933 5024 scope.go:117] "RemoveContainer" containerID="26635070795e5178fbd81f5cf7f73fddfdb331bccff4f7176c1ff8fe6e9e96ea" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.142485 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26635070795e5178fbd81f5cf7f73fddfdb331bccff4f7176c1ff8fe6e9e96ea"} err="failed to get container status \"26635070795e5178fbd81f5cf7f73fddfdb331bccff4f7176c1ff8fe6e9e96ea\": rpc error: code = NotFound desc = could not find container 
\"26635070795e5178fbd81f5cf7f73fddfdb331bccff4f7176c1ff8fe6e9e96ea\": container with ID starting with 26635070795e5178fbd81f5cf7f73fddfdb331bccff4f7176c1ff8fe6e9e96ea not found: ID does not exist" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.142586 5024 scope.go:117] "RemoveContainer" containerID="7df9ac47880da67c12b737138234078d27e3e28948b3f379df3d98df6a8308b4" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.142930 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7df9ac47880da67c12b737138234078d27e3e28948b3f379df3d98df6a8308b4"} err="failed to get container status \"7df9ac47880da67c12b737138234078d27e3e28948b3f379df3d98df6a8308b4\": rpc error: code = NotFound desc = could not find container \"7df9ac47880da67c12b737138234078d27e3e28948b3f379df3d98df6a8308b4\": container with ID starting with 7df9ac47880da67c12b737138234078d27e3e28948b3f379df3d98df6a8308b4 not found: ID does not exist" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.143021 5024 scope.go:117] "RemoveContainer" containerID="82d097732945244eaa066ed06114f45db194efe5ce740fae41a4edba867ce054" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.143356 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82d097732945244eaa066ed06114f45db194efe5ce740fae41a4edba867ce054"} err="failed to get container status \"82d097732945244eaa066ed06114f45db194efe5ce740fae41a4edba867ce054\": rpc error: code = NotFound desc = could not find container \"82d097732945244eaa066ed06114f45db194efe5ce740fae41a4edba867ce054\": container with ID starting with 82d097732945244eaa066ed06114f45db194efe5ce740fae41a4edba867ce054 not found: ID does not exist" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.143382 5024 scope.go:117] "RemoveContainer" containerID="6e4fbe22984dadd07783a3a88bea76407a5fd257700f3b6fc9682ac832682668" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.143626 5024 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e4fbe22984dadd07783a3a88bea76407a5fd257700f3b6fc9682ac832682668"} err="failed to get container status \"6e4fbe22984dadd07783a3a88bea76407a5fd257700f3b6fc9682ac832682668\": rpc error: code = NotFound desc = could not find container \"6e4fbe22984dadd07783a3a88bea76407a5fd257700f3b6fc9682ac832682668\": container with ID starting with 6e4fbe22984dadd07783a3a88bea76407a5fd257700f3b6fc9682ac832682668 not found: ID does not exist" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.143649 5024 scope.go:117] "RemoveContainer" containerID="26635070795e5178fbd81f5cf7f73fddfdb331bccff4f7176c1ff8fe6e9e96ea" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.143809 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26635070795e5178fbd81f5cf7f73fddfdb331bccff4f7176c1ff8fe6e9e96ea"} err="failed to get container status \"26635070795e5178fbd81f5cf7f73fddfdb331bccff4f7176c1ff8fe6e9e96ea\": rpc error: code = NotFound desc = could not find container \"26635070795e5178fbd81f5cf7f73fddfdb331bccff4f7176c1ff8fe6e9e96ea\": container with ID starting with 26635070795e5178fbd81f5cf7f73fddfdb331bccff4f7176c1ff8fe6e9e96ea not found: ID does not exist" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.143826 5024 scope.go:117] "RemoveContainer" containerID="7df9ac47880da67c12b737138234078d27e3e28948b3f379df3d98df6a8308b4" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.144080 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7df9ac47880da67c12b737138234078d27e3e28948b3f379df3d98df6a8308b4"} err="failed to get container status \"7df9ac47880da67c12b737138234078d27e3e28948b3f379df3d98df6a8308b4\": rpc error: code = NotFound desc = could not find container \"7df9ac47880da67c12b737138234078d27e3e28948b3f379df3d98df6a8308b4\": container with ID starting with 
7df9ac47880da67c12b737138234078d27e3e28948b3f379df3d98df6a8308b4 not found: ID does not exist" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.144098 5024 scope.go:117] "RemoveContainer" containerID="82d097732945244eaa066ed06114f45db194efe5ce740fae41a4edba867ce054" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.144389 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82d097732945244eaa066ed06114f45db194efe5ce740fae41a4edba867ce054"} err="failed to get container status \"82d097732945244eaa066ed06114f45db194efe5ce740fae41a4edba867ce054\": rpc error: code = NotFound desc = could not find container \"82d097732945244eaa066ed06114f45db194efe5ce740fae41a4edba867ce054\": container with ID starting with 82d097732945244eaa066ed06114f45db194efe5ce740fae41a4edba867ce054 not found: ID does not exist" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.144405 5024 scope.go:117] "RemoveContainer" containerID="6e4fbe22984dadd07783a3a88bea76407a5fd257700f3b6fc9682ac832682668" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.144713 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e4fbe22984dadd07783a3a88bea76407a5fd257700f3b6fc9682ac832682668"} err="failed to get container status \"6e4fbe22984dadd07783a3a88bea76407a5fd257700f3b6fc9682ac832682668\": rpc error: code = NotFound desc = could not find container \"6e4fbe22984dadd07783a3a88bea76407a5fd257700f3b6fc9682ac832682668\": container with ID starting with 6e4fbe22984dadd07783a3a88bea76407a5fd257700f3b6fc9682ac832682668 not found: ID does not exist" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.144796 5024 scope.go:117] "RemoveContainer" containerID="2c86969ca696b93b979832b0f0340a4cbad8a024ca5f53ab3b7a4d0f14c061c1" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.146565 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782" (UID: "2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.178006 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782" (UID: "2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.218701 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.218764 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5lmx\" (UniqueName: \"kubernetes.io/projected/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-kube-api-access-d5lmx\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.218781 5024 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.218794 5024 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-logs\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.218810 5024 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.218825 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.218841 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782-scripts\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.218853 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/620b030a-9baf-4c09-853d-c2e14b09df7b-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.311080 5024 scope.go:117] "RemoveContainer" containerID="0ddab389f822d2b2f3b0f06d167e3a6f77e974ea8f14527e4507530de0b80b23" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.352681 5024 scope.go:117] "RemoveContainer" containerID="2c86969ca696b93b979832b0f0340a4cbad8a024ca5f53ab3b7a4d0f14c061c1" Oct 07 13:23:25 crc kubenswrapper[5024]: E1007 13:23:25.353407 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c86969ca696b93b979832b0f0340a4cbad8a024ca5f53ab3b7a4d0f14c061c1\": container with ID starting with 2c86969ca696b93b979832b0f0340a4cbad8a024ca5f53ab3b7a4d0f14c061c1 not found: ID does not exist" containerID="2c86969ca696b93b979832b0f0340a4cbad8a024ca5f53ab3b7a4d0f14c061c1" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.353451 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c86969ca696b93b979832b0f0340a4cbad8a024ca5f53ab3b7a4d0f14c061c1"} err="failed to get container status 
\"2c86969ca696b93b979832b0f0340a4cbad8a024ca5f53ab3b7a4d0f14c061c1\": rpc error: code = NotFound desc = could not find container \"2c86969ca696b93b979832b0f0340a4cbad8a024ca5f53ab3b7a4d0f14c061c1\": container with ID starting with 2c86969ca696b93b979832b0f0340a4cbad8a024ca5f53ab3b7a4d0f14c061c1 not found: ID does not exist" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.353478 5024 scope.go:117] "RemoveContainer" containerID="0ddab389f822d2b2f3b0f06d167e3a6f77e974ea8f14527e4507530de0b80b23" Oct 07 13:23:25 crc kubenswrapper[5024]: E1007 13:23:25.354463 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ddab389f822d2b2f3b0f06d167e3a6f77e974ea8f14527e4507530de0b80b23\": container with ID starting with 0ddab389f822d2b2f3b0f06d167e3a6f77e974ea8f14527e4507530de0b80b23 not found: ID does not exist" containerID="0ddab389f822d2b2f3b0f06d167e3a6f77e974ea8f14527e4507530de0b80b23" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.354497 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ddab389f822d2b2f3b0f06d167e3a6f77e974ea8f14527e4507530de0b80b23"} err="failed to get container status \"0ddab389f822d2b2f3b0f06d167e3a6f77e974ea8f14527e4507530de0b80b23\": rpc error: code = NotFound desc = could not find container \"0ddab389f822d2b2f3b0f06d167e3a6f77e974ea8f14527e4507530de0b80b23\": container with ID starting with 0ddab389f822d2b2f3b0f06d167e3a6f77e974ea8f14527e4507530de0b80b23 not found: ID does not exist" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.374325 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.382286 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.394428 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/horizon-6db8948-lncr7"] Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.405000 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6db8948-lncr7"] Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.411306 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 07 13:23:25 crc kubenswrapper[5024]: E1007 13:23:25.411904 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620b030a-9baf-4c09-853d-c2e14b09df7b" containerName="ceilometer-notification-agent" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.411932 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="620b030a-9baf-4c09-853d-c2e14b09df7b" containerName="ceilometer-notification-agent" Oct 07 13:23:25 crc kubenswrapper[5024]: E1007 13:23:25.411953 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620b030a-9baf-4c09-853d-c2e14b09df7b" containerName="ceilometer-central-agent" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.411963 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="620b030a-9baf-4c09-853d-c2e14b09df7b" containerName="ceilometer-central-agent" Oct 07 13:23:25 crc kubenswrapper[5024]: E1007 13:23:25.411979 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782" containerName="horizon" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.411987 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782" containerName="horizon" Oct 07 13:23:25 crc kubenswrapper[5024]: E1007 13:23:25.412009 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620b030a-9baf-4c09-853d-c2e14b09df7b" containerName="proxy-httpd" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.412017 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="620b030a-9baf-4c09-853d-c2e14b09df7b" containerName="proxy-httpd" Oct 07 13:23:25 crc kubenswrapper[5024]: E1007 
13:23:25.412029 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59a7592-dced-40e5-abb1-d85862ca5ac7" containerName="init" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.412037 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59a7592-dced-40e5-abb1-d85862ca5ac7" containerName="init" Oct 07 13:23:25 crc kubenswrapper[5024]: E1007 13:23:25.412079 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782" containerName="horizon-log" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.412089 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782" containerName="horizon-log" Oct 07 13:23:25 crc kubenswrapper[5024]: E1007 13:23:25.412099 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620b030a-9baf-4c09-853d-c2e14b09df7b" containerName="sg-core" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.412107 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="620b030a-9baf-4c09-853d-c2e14b09df7b" containerName="sg-core" Oct 07 13:23:25 crc kubenswrapper[5024]: E1007 13:23:25.412127 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59a7592-dced-40e5-abb1-d85862ca5ac7" containerName="dnsmasq-dns" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.412187 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59a7592-dced-40e5-abb1-d85862ca5ac7" containerName="dnsmasq-dns" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.412498 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782" containerName="horizon" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.412527 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="620b030a-9baf-4c09-853d-c2e14b09df7b" containerName="ceilometer-central-agent" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.412547 5024 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="620b030a-9baf-4c09-853d-c2e14b09df7b" containerName="proxy-httpd" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.412570 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782" containerName="horizon-log" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.412592 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59a7592-dced-40e5-abb1-d85862ca5ac7" containerName="dnsmasq-dns" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.412611 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="620b030a-9baf-4c09-853d-c2e14b09df7b" containerName="ceilometer-notification-agent" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.412637 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="620b030a-9baf-4c09-853d-c2e14b09df7b" containerName="sg-core" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.415048 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.418659 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.459185 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.459238 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.459243 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.525794 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26ndl\" (UniqueName: \"kubernetes.io/projected/6891ff95-7d91-421e-a087-7cbf5fd9a6e9-kube-api-access-26ndl\") pod \"ceilometer-0\" (UID: \"6891ff95-7d91-421e-a087-7cbf5fd9a6e9\") " pod="openstack/ceilometer-0" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.526180 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6891ff95-7d91-421e-a087-7cbf5fd9a6e9-config-data\") pod \"ceilometer-0\" (UID: \"6891ff95-7d91-421e-a087-7cbf5fd9a6e9\") " pod="openstack/ceilometer-0" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.526273 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6891ff95-7d91-421e-a087-7cbf5fd9a6e9-scripts\") pod \"ceilometer-0\" (UID: \"6891ff95-7d91-421e-a087-7cbf5fd9a6e9\") " pod="openstack/ceilometer-0" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.526342 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6891ff95-7d91-421e-a087-7cbf5fd9a6e9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6891ff95-7d91-421e-a087-7cbf5fd9a6e9\") " pod="openstack/ceilometer-0" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.526411 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6891ff95-7d91-421e-a087-7cbf5fd9a6e9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6891ff95-7d91-421e-a087-7cbf5fd9a6e9\") " pod="openstack/ceilometer-0" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.526514 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6891ff95-7d91-421e-a087-7cbf5fd9a6e9-run-httpd\") pod \"ceilometer-0\" (UID: \"6891ff95-7d91-421e-a087-7cbf5fd9a6e9\") " pod="openstack/ceilometer-0" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.526554 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6891ff95-7d91-421e-a087-7cbf5fd9a6e9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6891ff95-7d91-421e-a087-7cbf5fd9a6e9\") " pod="openstack/ceilometer-0" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.526657 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6891ff95-7d91-421e-a087-7cbf5fd9a6e9-log-httpd\") pod \"ceilometer-0\" (UID: \"6891ff95-7d91-421e-a087-7cbf5fd9a6e9\") " pod="openstack/ceilometer-0" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.629463 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26ndl\" (UniqueName: \"kubernetes.io/projected/6891ff95-7d91-421e-a087-7cbf5fd9a6e9-kube-api-access-26ndl\") pod 
\"ceilometer-0\" (UID: \"6891ff95-7d91-421e-a087-7cbf5fd9a6e9\") " pod="openstack/ceilometer-0" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.629706 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6891ff95-7d91-421e-a087-7cbf5fd9a6e9-config-data\") pod \"ceilometer-0\" (UID: \"6891ff95-7d91-421e-a087-7cbf5fd9a6e9\") " pod="openstack/ceilometer-0" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.629748 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6891ff95-7d91-421e-a087-7cbf5fd9a6e9-scripts\") pod \"ceilometer-0\" (UID: \"6891ff95-7d91-421e-a087-7cbf5fd9a6e9\") " pod="openstack/ceilometer-0" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.629800 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6891ff95-7d91-421e-a087-7cbf5fd9a6e9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6891ff95-7d91-421e-a087-7cbf5fd9a6e9\") " pod="openstack/ceilometer-0" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.629834 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6891ff95-7d91-421e-a087-7cbf5fd9a6e9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6891ff95-7d91-421e-a087-7cbf5fd9a6e9\") " pod="openstack/ceilometer-0" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.629872 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6891ff95-7d91-421e-a087-7cbf5fd9a6e9-run-httpd\") pod \"ceilometer-0\" (UID: \"6891ff95-7d91-421e-a087-7cbf5fd9a6e9\") " pod="openstack/ceilometer-0" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.629903 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6891ff95-7d91-421e-a087-7cbf5fd9a6e9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6891ff95-7d91-421e-a087-7cbf5fd9a6e9\") " pod="openstack/ceilometer-0" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.629961 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6891ff95-7d91-421e-a087-7cbf5fd9a6e9-log-httpd\") pod \"ceilometer-0\" (UID: \"6891ff95-7d91-421e-a087-7cbf5fd9a6e9\") " pod="openstack/ceilometer-0" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.630955 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6891ff95-7d91-421e-a087-7cbf5fd9a6e9-log-httpd\") pod \"ceilometer-0\" (UID: \"6891ff95-7d91-421e-a087-7cbf5fd9a6e9\") " pod="openstack/ceilometer-0" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.630970 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6891ff95-7d91-421e-a087-7cbf5fd9a6e9-run-httpd\") pod \"ceilometer-0\" (UID: \"6891ff95-7d91-421e-a087-7cbf5fd9a6e9\") " pod="openstack/ceilometer-0" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.635675 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6891ff95-7d91-421e-a087-7cbf5fd9a6e9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6891ff95-7d91-421e-a087-7cbf5fd9a6e9\") " pod="openstack/ceilometer-0" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.638397 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6891ff95-7d91-421e-a087-7cbf5fd9a6e9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6891ff95-7d91-421e-a087-7cbf5fd9a6e9\") " pod="openstack/ceilometer-0" Oct 07 13:23:25 crc kubenswrapper[5024]: 
I1007 13:23:25.645595 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6891ff95-7d91-421e-a087-7cbf5fd9a6e9-config-data\") pod \"ceilometer-0\" (UID: \"6891ff95-7d91-421e-a087-7cbf5fd9a6e9\") " pod="openstack/ceilometer-0" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.646583 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6891ff95-7d91-421e-a087-7cbf5fd9a6e9-scripts\") pod \"ceilometer-0\" (UID: \"6891ff95-7d91-421e-a087-7cbf5fd9a6e9\") " pod="openstack/ceilometer-0" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.666671 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6891ff95-7d91-421e-a087-7cbf5fd9a6e9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6891ff95-7d91-421e-a087-7cbf5fd9a6e9\") " pod="openstack/ceilometer-0" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.671548 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26ndl\" (UniqueName: \"kubernetes.io/projected/6891ff95-7d91-421e-a087-7cbf5fd9a6e9-kube-api-access-26ndl\") pod \"ceilometer-0\" (UID: \"6891ff95-7d91-421e-a087-7cbf5fd9a6e9\") " pod="openstack/ceilometer-0" Oct 07 13:23:25 crc kubenswrapper[5024]: I1007 13:23:25.770082 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 07 13:23:26 crc kubenswrapper[5024]: I1007 13:23:26.242521 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 07 13:23:26 crc kubenswrapper[5024]: W1007 13:23:26.246454 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6891ff95_7d91_421e_a087_7cbf5fd9a6e9.slice/crio-a75722678aa2f6160918e780933228e8ab1cb38b82750448e43ebfac18d015e4 WatchSource:0}: Error finding container a75722678aa2f6160918e780933228e8ab1cb38b82750448e43ebfac18d015e4: Status 404 returned error can't find the container with id a75722678aa2f6160918e780933228e8ab1cb38b82750448e43ebfac18d015e4 Oct 07 13:23:26 crc kubenswrapper[5024]: I1007 13:23:26.767869 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782" path="/var/lib/kubelet/pods/2d3bd0f4-cea0-41b2-8fb5-e3209dfa3782/volumes" Oct 07 13:23:26 crc kubenswrapper[5024]: I1007 13:23:26.768975 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="620b030a-9baf-4c09-853d-c2e14b09df7b" path="/var/lib/kubelet/pods/620b030a-9baf-4c09-853d-c2e14b09df7b/volumes" Oct 07 13:23:27 crc kubenswrapper[5024]: I1007 13:23:27.070840 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6891ff95-7d91-421e-a087-7cbf5fd9a6e9","Type":"ContainerStarted","Data":"a75722678aa2f6160918e780933228e8ab1cb38b82750448e43ebfac18d015e4"} Oct 07 13:23:27 crc kubenswrapper[5024]: I1007 13:23:27.626325 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Oct 07 13:23:29 crc kubenswrapper[5024]: I1007 13:23:29.538382 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Oct 07 13:23:29 crc kubenswrapper[5024]: I1007 13:23:29.597297 5024 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/manila-scheduler-0"] Oct 07 13:23:30 crc kubenswrapper[5024]: I1007 13:23:30.151795 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6891ff95-7d91-421e-a087-7cbf5fd9a6e9","Type":"ContainerStarted","Data":"cea4a5d31daf1dab778347b2529b214ae41565a41026c722cb218b2f963293c4"} Oct 07 13:23:30 crc kubenswrapper[5024]: I1007 13:23:30.152005 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="3a8469e3-c933-44aa-8d01-325211458238" containerName="manila-scheduler" containerID="cri-o://2c1043621d39e45d25eea9a9e7bd010c9dedf49370a612cf5fb2372d8a6e8746" gracePeriod=30 Oct 07 13:23:30 crc kubenswrapper[5024]: I1007 13:23:30.152076 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="3a8469e3-c933-44aa-8d01-325211458238" containerName="probe" containerID="cri-o://b2346807f6119cbb6106e02eda2745c0e118b4bb6b2b62ef94ee1d2e11da1012" gracePeriod=30 Oct 07 13:23:31 crc kubenswrapper[5024]: I1007 13:23:31.165177 5024 generic.go:334] "Generic (PLEG): container finished" podID="3a8469e3-c933-44aa-8d01-325211458238" containerID="b2346807f6119cbb6106e02eda2745c0e118b4bb6b2b62ef94ee1d2e11da1012" exitCode=0 Oct 07 13:23:31 crc kubenswrapper[5024]: I1007 13:23:31.165263 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"3a8469e3-c933-44aa-8d01-325211458238","Type":"ContainerDied","Data":"b2346807f6119cbb6106e02eda2745c0e118b4bb6b2b62ef94ee1d2e11da1012"} Oct 07 13:23:31 crc kubenswrapper[5024]: I1007 13:23:31.168448 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6891ff95-7d91-421e-a087-7cbf5fd9a6e9","Type":"ContainerStarted","Data":"275a64b675e747eb42b0bb189cccaf1303e93fcdd86f477553d2d5387e31d0c0"} Oct 07 13:23:31 crc kubenswrapper[5024]: I1007 13:23:31.168479 5024 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6891ff95-7d91-421e-a087-7cbf5fd9a6e9","Type":"ContainerStarted","Data":"7c70fb74c593d17299c2cac125085a2eeefe3c057ac8cae184f3a508629e6fc1"} Oct 07 13:23:33 crc kubenswrapper[5024]: I1007 13:23:33.195397 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6891ff95-7d91-421e-a087-7cbf5fd9a6e9","Type":"ContainerStarted","Data":"f08dc3d6b93f2c492d3babbad31ddc265c16e93c7ff54368306e40f32d611035"} Oct 07 13:23:33 crc kubenswrapper[5024]: I1007 13:23:33.196036 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 07 13:23:33 crc kubenswrapper[5024]: I1007 13:23:33.224806 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.187323561 podStartE2EDuration="8.224779069s" podCreationTimestamp="2025-10-07 13:23:25 +0000 UTC" firstStartedPulling="2025-10-07 13:23:26.25170677 +0000 UTC m=+3344.327493618" lastFinishedPulling="2025-10-07 13:23:32.289162278 +0000 UTC m=+3350.364949126" observedRunningTime="2025-10-07 13:23:33.217640882 +0000 UTC m=+3351.293427740" watchObservedRunningTime="2025-10-07 13:23:33.224779069 +0000 UTC m=+3351.300565907" Oct 07 13:23:33 crc kubenswrapper[5024]: I1007 13:23:33.752418 5024 scope.go:117] "RemoveContainer" containerID="7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd" Oct 07 13:23:33 crc kubenswrapper[5024]: E1007 13:23:33.753251 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:23:34 crc kubenswrapper[5024]: I1007 
13:23:34.778220 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Oct 07 13:23:34 crc kubenswrapper[5024]: I1007 13:23:34.883238 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a8469e3-c933-44aa-8d01-325211458238-config-data\") pod \"3a8469e3-c933-44aa-8d01-325211458238\" (UID: \"3a8469e3-c933-44aa-8d01-325211458238\") " Oct 07 13:23:34 crc kubenswrapper[5024]: I1007 13:23:34.883304 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a8469e3-c933-44aa-8d01-325211458238-combined-ca-bundle\") pod \"3a8469e3-c933-44aa-8d01-325211458238\" (UID: \"3a8469e3-c933-44aa-8d01-325211458238\") " Oct 07 13:23:34 crc kubenswrapper[5024]: I1007 13:23:34.883509 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86696\" (UniqueName: \"kubernetes.io/projected/3a8469e3-c933-44aa-8d01-325211458238-kube-api-access-86696\") pod \"3a8469e3-c933-44aa-8d01-325211458238\" (UID: \"3a8469e3-c933-44aa-8d01-325211458238\") " Oct 07 13:23:34 crc kubenswrapper[5024]: I1007 13:23:34.883568 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a8469e3-c933-44aa-8d01-325211458238-scripts\") pod \"3a8469e3-c933-44aa-8d01-325211458238\" (UID: \"3a8469e3-c933-44aa-8d01-325211458238\") " Oct 07 13:23:34 crc kubenswrapper[5024]: I1007 13:23:34.883643 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a8469e3-c933-44aa-8d01-325211458238-etc-machine-id\") pod \"3a8469e3-c933-44aa-8d01-325211458238\" (UID: \"3a8469e3-c933-44aa-8d01-325211458238\") " Oct 07 13:23:34 crc kubenswrapper[5024]: I1007 13:23:34.883694 5024 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a8469e3-c933-44aa-8d01-325211458238-config-data-custom\") pod \"3a8469e3-c933-44aa-8d01-325211458238\" (UID: \"3a8469e3-c933-44aa-8d01-325211458238\") " Oct 07 13:23:34 crc kubenswrapper[5024]: I1007 13:23:34.885461 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a8469e3-c933-44aa-8d01-325211458238-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3a8469e3-c933-44aa-8d01-325211458238" (UID: "3a8469e3-c933-44aa-8d01-325211458238"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 07 13:23:34 crc kubenswrapper[5024]: I1007 13:23:34.891984 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Oct 07 13:23:34 crc kubenswrapper[5024]: I1007 13:23:34.893989 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a8469e3-c933-44aa-8d01-325211458238-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3a8469e3-c933-44aa-8d01-325211458238" (UID: "3a8469e3-c933-44aa-8d01-325211458238"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:23:34 crc kubenswrapper[5024]: I1007 13:23:34.894050 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a8469e3-c933-44aa-8d01-325211458238-kube-api-access-86696" (OuterVolumeSpecName: "kube-api-access-86696") pod "3a8469e3-c933-44aa-8d01-325211458238" (UID: "3a8469e3-c933-44aa-8d01-325211458238"). InnerVolumeSpecName "kube-api-access-86696". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:23:34 crc kubenswrapper[5024]: I1007 13:23:34.898538 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a8469e3-c933-44aa-8d01-325211458238-scripts" (OuterVolumeSpecName: "scripts") pod "3a8469e3-c933-44aa-8d01-325211458238" (UID: "3a8469e3-c933-44aa-8d01-325211458238"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:23:34 crc kubenswrapper[5024]: I1007 13:23:34.986475 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86696\" (UniqueName: \"kubernetes.io/projected/3a8469e3-c933-44aa-8d01-325211458238-kube-api-access-86696\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:34 crc kubenswrapper[5024]: I1007 13:23:34.986521 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a8469e3-c933-44aa-8d01-325211458238-scripts\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:34 crc kubenswrapper[5024]: I1007 13:23:34.986535 5024 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a8469e3-c933-44aa-8d01-325211458238-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:34 crc kubenswrapper[5024]: I1007 13:23:34.986546 5024 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a8469e3-c933-44aa-8d01-325211458238-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:34 crc kubenswrapper[5024]: I1007 13:23:34.992921 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a8469e3-c933-44aa-8d01-325211458238-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a8469e3-c933-44aa-8d01-325211458238" (UID: "3a8469e3-c933-44aa-8d01-325211458238"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.055355 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a8469e3-c933-44aa-8d01-325211458238-config-data" (OuterVolumeSpecName: "config-data") pod "3a8469e3-c933-44aa-8d01-325211458238" (UID: "3a8469e3-c933-44aa-8d01-325211458238"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.088459 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a8469e3-c933-44aa-8d01-325211458238-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.088609 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a8469e3-c933-44aa-8d01-325211458238-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.215353 5024 generic.go:334] "Generic (PLEG): container finished" podID="3a8469e3-c933-44aa-8d01-325211458238" containerID="2c1043621d39e45d25eea9a9e7bd010c9dedf49370a612cf5fb2372d8a6e8746" exitCode=0 Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.215408 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"3a8469e3-c933-44aa-8d01-325211458238","Type":"ContainerDied","Data":"2c1043621d39e45d25eea9a9e7bd010c9dedf49370a612cf5fb2372d8a6e8746"} Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.215441 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"3a8469e3-c933-44aa-8d01-325211458238","Type":"ContainerDied","Data":"2156e1b9f8e52604cc24c88232cd1d475518846058d3249f6a3750820f6fe87e"} Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.215460 5024 scope.go:117] "RemoveContainer" 
containerID="b2346807f6119cbb6106e02eda2745c0e118b4bb6b2b62ef94ee1d2e11da1012" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.215627 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.250506 5024 scope.go:117] "RemoveContainer" containerID="2c1043621d39e45d25eea9a9e7bd010c9dedf49370a612cf5fb2372d8a6e8746" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.256717 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.273432 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.286855 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Oct 07 13:23:35 crc kubenswrapper[5024]: E1007 13:23:35.287664 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8469e3-c933-44aa-8d01-325211458238" containerName="probe" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.287753 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8469e3-c933-44aa-8d01-325211458238" containerName="probe" Oct 07 13:23:35 crc kubenswrapper[5024]: E1007 13:23:35.287849 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8469e3-c933-44aa-8d01-325211458238" containerName="manila-scheduler" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.287903 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8469e3-c933-44aa-8d01-325211458238" containerName="manila-scheduler" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.288169 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a8469e3-c933-44aa-8d01-325211458238" containerName="probe" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.288232 5024 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3a8469e3-c933-44aa-8d01-325211458238" containerName="manila-scheduler" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.289479 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.294846 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.299414 5024 scope.go:117] "RemoveContainer" containerID="b2346807f6119cbb6106e02eda2745c0e118b4bb6b2b62ef94ee1d2e11da1012" Oct 07 13:23:35 crc kubenswrapper[5024]: E1007 13:23:35.303732 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2346807f6119cbb6106e02eda2745c0e118b4bb6b2b62ef94ee1d2e11da1012\": container with ID starting with b2346807f6119cbb6106e02eda2745c0e118b4bb6b2b62ef94ee1d2e11da1012 not found: ID does not exist" containerID="b2346807f6119cbb6106e02eda2745c0e118b4bb6b2b62ef94ee1d2e11da1012" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.303801 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2346807f6119cbb6106e02eda2745c0e118b4bb6b2b62ef94ee1d2e11da1012"} err="failed to get container status \"b2346807f6119cbb6106e02eda2745c0e118b4bb6b2b62ef94ee1d2e11da1012\": rpc error: code = NotFound desc = could not find container \"b2346807f6119cbb6106e02eda2745c0e118b4bb6b2b62ef94ee1d2e11da1012\": container with ID starting with b2346807f6119cbb6106e02eda2745c0e118b4bb6b2b62ef94ee1d2e11da1012 not found: ID does not exist" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.303846 5024 scope.go:117] "RemoveContainer" containerID="2c1043621d39e45d25eea9a9e7bd010c9dedf49370a612cf5fb2372d8a6e8746" Oct 07 13:23:35 crc kubenswrapper[5024]: E1007 13:23:35.304404 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"2c1043621d39e45d25eea9a9e7bd010c9dedf49370a612cf5fb2372d8a6e8746\": container with ID starting with 2c1043621d39e45d25eea9a9e7bd010c9dedf49370a612cf5fb2372d8a6e8746 not found: ID does not exist" containerID="2c1043621d39e45d25eea9a9e7bd010c9dedf49370a612cf5fb2372d8a6e8746" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.304430 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c1043621d39e45d25eea9a9e7bd010c9dedf49370a612cf5fb2372d8a6e8746"} err="failed to get container status \"2c1043621d39e45d25eea9a9e7bd010c9dedf49370a612cf5fb2372d8a6e8746\": rpc error: code = NotFound desc = could not find container \"2c1043621d39e45d25eea9a9e7bd010c9dedf49370a612cf5fb2372d8a6e8746\": container with ID starting with 2c1043621d39e45d25eea9a9e7bd010c9dedf49370a612cf5fb2372d8a6e8746 not found: ID does not exist" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.321384 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.394082 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/754c6c13-696f-4a69-ad50-ba23eb523d41-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"754c6c13-696f-4a69-ad50-ba23eb523d41\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.394162 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbc7n\" (UniqueName: \"kubernetes.io/projected/754c6c13-696f-4a69-ad50-ba23eb523d41-kube-api-access-rbc7n\") pod \"manila-scheduler-0\" (UID: \"754c6c13-696f-4a69-ad50-ba23eb523d41\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.394227 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/754c6c13-696f-4a69-ad50-ba23eb523d41-config-data\") pod \"manila-scheduler-0\" (UID: \"754c6c13-696f-4a69-ad50-ba23eb523d41\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.394299 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/754c6c13-696f-4a69-ad50-ba23eb523d41-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"754c6c13-696f-4a69-ad50-ba23eb523d41\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.394337 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/754c6c13-696f-4a69-ad50-ba23eb523d41-scripts\") pod \"manila-scheduler-0\" (UID: \"754c6c13-696f-4a69-ad50-ba23eb523d41\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.394668 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/754c6c13-696f-4a69-ad50-ba23eb523d41-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"754c6c13-696f-4a69-ad50-ba23eb523d41\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.496958 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/754c6c13-696f-4a69-ad50-ba23eb523d41-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"754c6c13-696f-4a69-ad50-ba23eb523d41\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.497025 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbc7n\" (UniqueName: 
\"kubernetes.io/projected/754c6c13-696f-4a69-ad50-ba23eb523d41-kube-api-access-rbc7n\") pod \"manila-scheduler-0\" (UID: \"754c6c13-696f-4a69-ad50-ba23eb523d41\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.497362 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/754c6c13-696f-4a69-ad50-ba23eb523d41-config-data\") pod \"manila-scheduler-0\" (UID: \"754c6c13-696f-4a69-ad50-ba23eb523d41\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.497437 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/754c6c13-696f-4a69-ad50-ba23eb523d41-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"754c6c13-696f-4a69-ad50-ba23eb523d41\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.497474 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/754c6c13-696f-4a69-ad50-ba23eb523d41-scripts\") pod \"manila-scheduler-0\" (UID: \"754c6c13-696f-4a69-ad50-ba23eb523d41\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.497502 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/754c6c13-696f-4a69-ad50-ba23eb523d41-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"754c6c13-696f-4a69-ad50-ba23eb523d41\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.497625 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/754c6c13-696f-4a69-ad50-ba23eb523d41-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"754c6c13-696f-4a69-ad50-ba23eb523d41\") " 
pod="openstack/manila-scheduler-0" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.502264 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/754c6c13-696f-4a69-ad50-ba23eb523d41-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"754c6c13-696f-4a69-ad50-ba23eb523d41\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.502355 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/754c6c13-696f-4a69-ad50-ba23eb523d41-config-data\") pod \"manila-scheduler-0\" (UID: \"754c6c13-696f-4a69-ad50-ba23eb523d41\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.502487 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/754c6c13-696f-4a69-ad50-ba23eb523d41-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"754c6c13-696f-4a69-ad50-ba23eb523d41\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.503250 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/754c6c13-696f-4a69-ad50-ba23eb523d41-scripts\") pod \"manila-scheduler-0\" (UID: \"754c6c13-696f-4a69-ad50-ba23eb523d41\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.530108 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbc7n\" (UniqueName: \"kubernetes.io/projected/754c6c13-696f-4a69-ad50-ba23eb523d41-kube-api-access-rbc7n\") pod \"manila-scheduler-0\" (UID: \"754c6c13-696f-4a69-ad50-ba23eb523d41\") " pod="openstack/manila-scheduler-0" Oct 07 13:23:35 crc kubenswrapper[5024]: I1007 13:23:35.613581 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Oct 07 13:23:36 crc kubenswrapper[5024]: W1007 13:23:36.144303 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod754c6c13_696f_4a69_ad50_ba23eb523d41.slice/crio-7e94ff33b4f85391657a904118d8bcfa3f032828809385195ca1479b2eba0a42 WatchSource:0}: Error finding container 7e94ff33b4f85391657a904118d8bcfa3f032828809385195ca1479b2eba0a42: Status 404 returned error can't find the container with id 7e94ff33b4f85391657a904118d8bcfa3f032828809385195ca1479b2eba0a42 Oct 07 13:23:36 crc kubenswrapper[5024]: I1007 13:23:36.146501 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Oct 07 13:23:36 crc kubenswrapper[5024]: I1007 13:23:36.228688 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"754c6c13-696f-4a69-ad50-ba23eb523d41","Type":"ContainerStarted","Data":"7e94ff33b4f85391657a904118d8bcfa3f032828809385195ca1479b2eba0a42"} Oct 07 13:23:36 crc kubenswrapper[5024]: I1007 13:23:36.777607 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a8469e3-c933-44aa-8d01-325211458238" path="/var/lib/kubelet/pods/3a8469e3-c933-44aa-8d01-325211458238/volumes" Oct 07 13:23:37 crc kubenswrapper[5024]: I1007 13:23:37.249108 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"754c6c13-696f-4a69-ad50-ba23eb523d41","Type":"ContainerStarted","Data":"c3a79c93465a8ca23dcb769a4e94b0772578cec7e9173fc2a60b6838c470953f"} Oct 07 13:23:38 crc kubenswrapper[5024]: I1007 13:23:38.262468 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"754c6c13-696f-4a69-ad50-ba23eb523d41","Type":"ContainerStarted","Data":"05dd8c679e716b6f42c73d272946291164a29672bd853f2e6955d4b5015a76d1"} Oct 07 13:23:38 crc kubenswrapper[5024]: I1007 13:23:38.287515 5024 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.287489535 podStartE2EDuration="3.287489535s" podCreationTimestamp="2025-10-07 13:23:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 13:23:38.286532777 +0000 UTC m=+3356.362319625" watchObservedRunningTime="2025-10-07 13:23:38.287489535 +0000 UTC m=+3356.363276373" Oct 07 13:23:39 crc kubenswrapper[5024]: I1007 13:23:39.355269 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Oct 07 13:23:39 crc kubenswrapper[5024]: I1007 13:23:39.436239 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Oct 07 13:23:40 crc kubenswrapper[5024]: I1007 13:23:40.283923 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1" containerName="manila-share" containerID="cri-o://3de8bd5bc2840619f3284a9a95a825c05f5684ed095048beb8b4ab0461607019" gracePeriod=30 Oct 07 13:23:40 crc kubenswrapper[5024]: I1007 13:23:40.283984 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1" containerName="probe" containerID="cri-o://1838512b44ffc0120818f8c53dc2d81d9fc5495683da0ba277c3d46fded108ec" gracePeriod=30 Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.295836 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.310753 5024 generic.go:334] "Generic (PLEG): container finished" podID="dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1" containerID="1838512b44ffc0120818f8c53dc2d81d9fc5495683da0ba277c3d46fded108ec" exitCode=0 Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.310826 5024 generic.go:334] "Generic (PLEG): container finished" podID="dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1" containerID="3de8bd5bc2840619f3284a9a95a825c05f5684ed095048beb8b4ab0461607019" exitCode=1 Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.310891 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1","Type":"ContainerDied","Data":"1838512b44ffc0120818f8c53dc2d81d9fc5495683da0ba277c3d46fded108ec"} Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.310946 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1","Type":"ContainerDied","Data":"3de8bd5bc2840619f3284a9a95a825c05f5684ed095048beb8b4ab0461607019"} Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.310974 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1","Type":"ContainerDied","Data":"d7046281d72d557b3b2b0d48844c01de7b60e0029b9eb8716a48ca4c32857d7c"} Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.311011 5024 scope.go:117] "RemoveContainer" containerID="1838512b44ffc0120818f8c53dc2d81d9fc5495683da0ba277c3d46fded108ec" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.311503 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.343236 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-scripts\") pod \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.343586 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4w9r\" (UniqueName: \"kubernetes.io/projected/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-kube-api-access-c4w9r\") pod \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.343788 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-combined-ca-bundle\") pod \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.343990 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-etc-machine-id\") pod \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.344120 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-ceph\") pod \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.344359 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-config-data-custom\") pod \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.344468 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-var-lib-manila\") pod \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.344568 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-config-data\") pod \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\" (UID: \"dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1\") " Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.349415 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1" (UID: "dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.349979 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1" (UID: "dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1"). InnerVolumeSpecName "var-lib-manila". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.354328 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-scripts" (OuterVolumeSpecName: "scripts") pod "dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1" (UID: "dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.354563 5024 scope.go:117] "RemoveContainer" containerID="3de8bd5bc2840619f3284a9a95a825c05f5684ed095048beb8b4ab0461607019" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.354352 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-ceph" (OuterVolumeSpecName: "ceph") pod "dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1" (UID: "dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.354422 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-kube-api-access-c4w9r" (OuterVolumeSpecName: "kube-api-access-c4w9r") pod "dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1" (UID: "dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1"). InnerVolumeSpecName "kube-api-access-c4w9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.362277 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1" (UID: "dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.429539 5024 scope.go:117] "RemoveContainer" containerID="1838512b44ffc0120818f8c53dc2d81d9fc5495683da0ba277c3d46fded108ec" Oct 07 13:23:41 crc kubenswrapper[5024]: E1007 13:23:41.430664 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1838512b44ffc0120818f8c53dc2d81d9fc5495683da0ba277c3d46fded108ec\": container with ID starting with 1838512b44ffc0120818f8c53dc2d81d9fc5495683da0ba277c3d46fded108ec not found: ID does not exist" containerID="1838512b44ffc0120818f8c53dc2d81d9fc5495683da0ba277c3d46fded108ec" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.430703 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1838512b44ffc0120818f8c53dc2d81d9fc5495683da0ba277c3d46fded108ec"} err="failed to get container status \"1838512b44ffc0120818f8c53dc2d81d9fc5495683da0ba277c3d46fded108ec\": rpc error: code = NotFound desc = could not find container \"1838512b44ffc0120818f8c53dc2d81d9fc5495683da0ba277c3d46fded108ec\": container with ID starting with 1838512b44ffc0120818f8c53dc2d81d9fc5495683da0ba277c3d46fded108ec not found: ID does not exist" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.430732 5024 scope.go:117] "RemoveContainer" containerID="3de8bd5bc2840619f3284a9a95a825c05f5684ed095048beb8b4ab0461607019" Oct 07 13:23:41 crc kubenswrapper[5024]: E1007 13:23:41.431009 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3de8bd5bc2840619f3284a9a95a825c05f5684ed095048beb8b4ab0461607019\": container with ID starting with 3de8bd5bc2840619f3284a9a95a825c05f5684ed095048beb8b4ab0461607019 not found: ID does not exist" containerID="3de8bd5bc2840619f3284a9a95a825c05f5684ed095048beb8b4ab0461607019" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.431042 
5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3de8bd5bc2840619f3284a9a95a825c05f5684ed095048beb8b4ab0461607019"} err="failed to get container status \"3de8bd5bc2840619f3284a9a95a825c05f5684ed095048beb8b4ab0461607019\": rpc error: code = NotFound desc = could not find container \"3de8bd5bc2840619f3284a9a95a825c05f5684ed095048beb8b4ab0461607019\": container with ID starting with 3de8bd5bc2840619f3284a9a95a825c05f5684ed095048beb8b4ab0461607019 not found: ID does not exist" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.431061 5024 scope.go:117] "RemoveContainer" containerID="1838512b44ffc0120818f8c53dc2d81d9fc5495683da0ba277c3d46fded108ec" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.431380 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1838512b44ffc0120818f8c53dc2d81d9fc5495683da0ba277c3d46fded108ec"} err="failed to get container status \"1838512b44ffc0120818f8c53dc2d81d9fc5495683da0ba277c3d46fded108ec\": rpc error: code = NotFound desc = could not find container \"1838512b44ffc0120818f8c53dc2d81d9fc5495683da0ba277c3d46fded108ec\": container with ID starting with 1838512b44ffc0120818f8c53dc2d81d9fc5495683da0ba277c3d46fded108ec not found: ID does not exist" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.431414 5024 scope.go:117] "RemoveContainer" containerID="3de8bd5bc2840619f3284a9a95a825c05f5684ed095048beb8b4ab0461607019" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.431826 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3de8bd5bc2840619f3284a9a95a825c05f5684ed095048beb8b4ab0461607019"} err="failed to get container status \"3de8bd5bc2840619f3284a9a95a825c05f5684ed095048beb8b4ab0461607019\": rpc error: code = NotFound desc = could not find container \"3de8bd5bc2840619f3284a9a95a825c05f5684ed095048beb8b4ab0461607019\": container with ID starting with 
3de8bd5bc2840619f3284a9a95a825c05f5684ed095048beb8b4ab0461607019 not found: ID does not exist" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.437491 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1" (UID: "dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.447045 5024 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-ceph\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.447083 5024 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.447096 5024 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-var-lib-manila\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.447104 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-scripts\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.447113 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4w9r\" (UniqueName: \"kubernetes.io/projected/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-kube-api-access-c4w9r\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.447123 5024 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.447130 5024 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.482458 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-config-data" (OuterVolumeSpecName: "config-data") pod "dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1" (UID: "dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.548997 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.647516 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.664496 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.697773 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Oct 07 13:23:41 crc kubenswrapper[5024]: E1007 13:23:41.698609 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1" containerName="probe" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.698643 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1" containerName="probe" Oct 07 13:23:41 crc kubenswrapper[5024]: E1007 
13:23:41.698721 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1" containerName="manila-share" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.698739 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1" containerName="manila-share" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.699206 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1" containerName="manila-share" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.699250 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1" containerName="probe" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.701076 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.709943 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.710429 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.756262 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b21d961-b63f-40b0-b1f6-54562b4edcdb-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"1b21d961-b63f-40b0-b1f6-54562b4edcdb\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.756322 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1b21d961-b63f-40b0-b1f6-54562b4edcdb-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"1b21d961-b63f-40b0-b1f6-54562b4edcdb\") 
" pod="openstack/manila-share-share1-0" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.756345 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7g67\" (UniqueName: \"kubernetes.io/projected/1b21d961-b63f-40b0-b1f6-54562b4edcdb-kube-api-access-g7g67\") pod \"manila-share-share1-0\" (UID: \"1b21d961-b63f-40b0-b1f6-54562b4edcdb\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.756381 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/1b21d961-b63f-40b0-b1f6-54562b4edcdb-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"1b21d961-b63f-40b0-b1f6-54562b4edcdb\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.756558 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1b21d961-b63f-40b0-b1f6-54562b4edcdb-ceph\") pod \"manila-share-share1-0\" (UID: \"1b21d961-b63f-40b0-b1f6-54562b4edcdb\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.756638 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b21d961-b63f-40b0-b1f6-54562b4edcdb-scripts\") pod \"manila-share-share1-0\" (UID: \"1b21d961-b63f-40b0-b1f6-54562b4edcdb\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.756662 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b21d961-b63f-40b0-b1f6-54562b4edcdb-config-data\") pod \"manila-share-share1-0\" (UID: \"1b21d961-b63f-40b0-b1f6-54562b4edcdb\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:41 crc 
kubenswrapper[5024]: I1007 13:23:41.756841 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b21d961-b63f-40b0-b1f6-54562b4edcdb-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"1b21d961-b63f-40b0-b1f6-54562b4edcdb\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.858552 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/1b21d961-b63f-40b0-b1f6-54562b4edcdb-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"1b21d961-b63f-40b0-b1f6-54562b4edcdb\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.858645 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1b21d961-b63f-40b0-b1f6-54562b4edcdb-ceph\") pod \"manila-share-share1-0\" (UID: \"1b21d961-b63f-40b0-b1f6-54562b4edcdb\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.858682 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b21d961-b63f-40b0-b1f6-54562b4edcdb-scripts\") pod \"manila-share-share1-0\" (UID: \"1b21d961-b63f-40b0-b1f6-54562b4edcdb\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.858704 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b21d961-b63f-40b0-b1f6-54562b4edcdb-config-data\") pod \"manila-share-share1-0\" (UID: \"1b21d961-b63f-40b0-b1f6-54562b4edcdb\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.858727 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" 
(UniqueName: \"kubernetes.io/host-path/1b21d961-b63f-40b0-b1f6-54562b4edcdb-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"1b21d961-b63f-40b0-b1f6-54562b4edcdb\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.858775 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b21d961-b63f-40b0-b1f6-54562b4edcdb-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"1b21d961-b63f-40b0-b1f6-54562b4edcdb\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.858971 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b21d961-b63f-40b0-b1f6-54562b4edcdb-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"1b21d961-b63f-40b0-b1f6-54562b4edcdb\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.859103 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1b21d961-b63f-40b0-b1f6-54562b4edcdb-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"1b21d961-b63f-40b0-b1f6-54562b4edcdb\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.859131 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7g67\" (UniqueName: \"kubernetes.io/projected/1b21d961-b63f-40b0-b1f6-54562b4edcdb-kube-api-access-g7g67\") pod \"manila-share-share1-0\" (UID: \"1b21d961-b63f-40b0-b1f6-54562b4edcdb\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.859675 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1b21d961-b63f-40b0-b1f6-54562b4edcdb-etc-machine-id\") pod 
\"manila-share-share1-0\" (UID: \"1b21d961-b63f-40b0-b1f6-54562b4edcdb\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.863490 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b21d961-b63f-40b0-b1f6-54562b4edcdb-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"1b21d961-b63f-40b0-b1f6-54562b4edcdb\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.864307 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b21d961-b63f-40b0-b1f6-54562b4edcdb-config-data\") pod \"manila-share-share1-0\" (UID: \"1b21d961-b63f-40b0-b1f6-54562b4edcdb\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.870866 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b21d961-b63f-40b0-b1f6-54562b4edcdb-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"1b21d961-b63f-40b0-b1f6-54562b4edcdb\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.882476 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1b21d961-b63f-40b0-b1f6-54562b4edcdb-ceph\") pod \"manila-share-share1-0\" (UID: \"1b21d961-b63f-40b0-b1f6-54562b4edcdb\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 13:23:41.882569 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7g67\" (UniqueName: \"kubernetes.io/projected/1b21d961-b63f-40b0-b1f6-54562b4edcdb-kube-api-access-g7g67\") pod \"manila-share-share1-0\" (UID: \"1b21d961-b63f-40b0-b1f6-54562b4edcdb\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:41 crc kubenswrapper[5024]: I1007 
13:23:41.892765 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b21d961-b63f-40b0-b1f6-54562b4edcdb-scripts\") pod \"manila-share-share1-0\" (UID: \"1b21d961-b63f-40b0-b1f6-54562b4edcdb\") " pod="openstack/manila-share-share1-0" Oct 07 13:23:42 crc kubenswrapper[5024]: I1007 13:23:42.031283 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Oct 07 13:23:42 crc kubenswrapper[5024]: I1007 13:23:42.682515 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Oct 07 13:23:42 crc kubenswrapper[5024]: I1007 13:23:42.786390 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1" path="/var/lib/kubelet/pods/dc5cd2d0-f5b1-4c85-931d-0df8bd1a3ff1/volumes" Oct 07 13:23:43 crc kubenswrapper[5024]: I1007 13:23:43.340586 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"1b21d961-b63f-40b0-b1f6-54562b4edcdb","Type":"ContainerStarted","Data":"b35cbb0eb313bb0d56f91918fbdc4707ed04c5da98420b3c42478ecb0fd528ab"} Oct 07 13:23:44 crc kubenswrapper[5024]: I1007 13:23:44.398337 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"1b21d961-b63f-40b0-b1f6-54562b4edcdb","Type":"ContainerStarted","Data":"a476c087a2875a16171f4d52f622ecc96b6f686294433bb326fabc10e7b351e8"} Oct 07 13:23:44 crc kubenswrapper[5024]: I1007 13:23:44.398796 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"1b21d961-b63f-40b0-b1f6-54562b4edcdb","Type":"ContainerStarted","Data":"9386e2f95173259c4d165b6f615b23af77cfb9a03639ca190df1a5fced78c222"} Oct 07 13:23:44 crc kubenswrapper[5024]: I1007 13:23:44.438683 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" 
podStartSLOduration=3.438662654 podStartE2EDuration="3.438662654s" podCreationTimestamp="2025-10-07 13:23:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 13:23:44.43472761 +0000 UTC m=+3362.510514468" watchObservedRunningTime="2025-10-07 13:23:44.438662654 +0000 UTC m=+3362.514449512" Oct 07 13:23:45 crc kubenswrapper[5024]: I1007 13:23:45.613702 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Oct 07 13:23:46 crc kubenswrapper[5024]: I1007 13:23:46.752863 5024 scope.go:117] "RemoveContainer" containerID="7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd" Oct 07 13:23:47 crc kubenswrapper[5024]: I1007 13:23:47.434879 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerStarted","Data":"d044cb36aa84e31c51266795b5076f0540682ca4ddf68579c24c9349196a1751"} Oct 07 13:23:52 crc kubenswrapper[5024]: I1007 13:23:52.031813 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Oct 07 13:23:55 crc kubenswrapper[5024]: I1007 13:23:55.778284 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 07 13:23:57 crc kubenswrapper[5024]: I1007 13:23:57.354536 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Oct 07 13:24:03 crc kubenswrapper[5024]: I1007 13:24:03.987775 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Oct 07 13:24:30 crc kubenswrapper[5024]: I1007 13:24:30.171352 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9w2nx"] Oct 07 13:24:30 crc kubenswrapper[5024]: I1007 
13:24:30.176413 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9w2nx" Oct 07 13:24:30 crc kubenswrapper[5024]: I1007 13:24:30.189823 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9w2nx"] Oct 07 13:24:30 crc kubenswrapper[5024]: I1007 13:24:30.284176 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56a9600-52f6-485e-bb80-159f6ee9e5ef-utilities\") pod \"certified-operators-9w2nx\" (UID: \"e56a9600-52f6-485e-bb80-159f6ee9e5ef\") " pod="openshift-marketplace/certified-operators-9w2nx" Oct 07 13:24:30 crc kubenswrapper[5024]: I1007 13:24:30.284401 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56a9600-52f6-485e-bb80-159f6ee9e5ef-catalog-content\") pod \"certified-operators-9w2nx\" (UID: \"e56a9600-52f6-485e-bb80-159f6ee9e5ef\") " pod="openshift-marketplace/certified-operators-9w2nx" Oct 07 13:24:30 crc kubenswrapper[5024]: I1007 13:24:30.284586 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdq2x\" (UniqueName: \"kubernetes.io/projected/e56a9600-52f6-485e-bb80-159f6ee9e5ef-kube-api-access-hdq2x\") pod \"certified-operators-9w2nx\" (UID: \"e56a9600-52f6-485e-bb80-159f6ee9e5ef\") " pod="openshift-marketplace/certified-operators-9w2nx" Oct 07 13:24:30 crc kubenswrapper[5024]: I1007 13:24:30.387404 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56a9600-52f6-485e-bb80-159f6ee9e5ef-utilities\") pod \"certified-operators-9w2nx\" (UID: \"e56a9600-52f6-485e-bb80-159f6ee9e5ef\") " pod="openshift-marketplace/certified-operators-9w2nx" Oct 07 13:24:30 crc kubenswrapper[5024]: I1007 
13:24:30.387867 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56a9600-52f6-485e-bb80-159f6ee9e5ef-catalog-content\") pod \"certified-operators-9w2nx\" (UID: \"e56a9600-52f6-485e-bb80-159f6ee9e5ef\") " pod="openshift-marketplace/certified-operators-9w2nx" Oct 07 13:24:30 crc kubenswrapper[5024]: I1007 13:24:30.387923 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdq2x\" (UniqueName: \"kubernetes.io/projected/e56a9600-52f6-485e-bb80-159f6ee9e5ef-kube-api-access-hdq2x\") pod \"certified-operators-9w2nx\" (UID: \"e56a9600-52f6-485e-bb80-159f6ee9e5ef\") " pod="openshift-marketplace/certified-operators-9w2nx" Oct 07 13:24:30 crc kubenswrapper[5024]: I1007 13:24:30.387976 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56a9600-52f6-485e-bb80-159f6ee9e5ef-utilities\") pod \"certified-operators-9w2nx\" (UID: \"e56a9600-52f6-485e-bb80-159f6ee9e5ef\") " pod="openshift-marketplace/certified-operators-9w2nx" Oct 07 13:24:30 crc kubenswrapper[5024]: I1007 13:24:30.388422 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56a9600-52f6-485e-bb80-159f6ee9e5ef-catalog-content\") pod \"certified-operators-9w2nx\" (UID: \"e56a9600-52f6-485e-bb80-159f6ee9e5ef\") " pod="openshift-marketplace/certified-operators-9w2nx" Oct 07 13:24:30 crc kubenswrapper[5024]: I1007 13:24:30.407757 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdq2x\" (UniqueName: \"kubernetes.io/projected/e56a9600-52f6-485e-bb80-159f6ee9e5ef-kube-api-access-hdq2x\") pod \"certified-operators-9w2nx\" (UID: \"e56a9600-52f6-485e-bb80-159f6ee9e5ef\") " pod="openshift-marketplace/certified-operators-9w2nx" Oct 07 13:24:30 crc kubenswrapper[5024]: I1007 13:24:30.516741 5024 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9w2nx" Oct 07 13:24:31 crc kubenswrapper[5024]: I1007 13:24:31.109675 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9w2nx"] Oct 07 13:24:31 crc kubenswrapper[5024]: I1007 13:24:31.984829 5024 generic.go:334] "Generic (PLEG): container finished" podID="e56a9600-52f6-485e-bb80-159f6ee9e5ef" containerID="9a9c27748811ce92fa553c599c38356ab1f32352dc51a55e283499ae6f02c4a0" exitCode=0 Oct 07 13:24:31 crc kubenswrapper[5024]: I1007 13:24:31.985171 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9w2nx" event={"ID":"e56a9600-52f6-485e-bb80-159f6ee9e5ef","Type":"ContainerDied","Data":"9a9c27748811ce92fa553c599c38356ab1f32352dc51a55e283499ae6f02c4a0"} Oct 07 13:24:31 crc kubenswrapper[5024]: I1007 13:24:31.985312 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9w2nx" event={"ID":"e56a9600-52f6-485e-bb80-159f6ee9e5ef","Type":"ContainerStarted","Data":"f158ed70f7aab92a53b790ee59cef6b2ac591040705fc1bd7c06091f77381a24"} Oct 07 13:24:34 crc kubenswrapper[5024]: I1007 13:24:34.009647 5024 generic.go:334] "Generic (PLEG): container finished" podID="e56a9600-52f6-485e-bb80-159f6ee9e5ef" containerID="2614b1fc854643426fe79c900b6369933b620f3e40ea78a387a43f0aa5ce25c0" exitCode=0 Oct 07 13:24:34 crc kubenswrapper[5024]: I1007 13:24:34.009702 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9w2nx" event={"ID":"e56a9600-52f6-485e-bb80-159f6ee9e5ef","Type":"ContainerDied","Data":"2614b1fc854643426fe79c900b6369933b620f3e40ea78a387a43f0aa5ce25c0"} Oct 07 13:24:35 crc kubenswrapper[5024]: I1007 13:24:35.027391 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9w2nx" 
event={"ID":"e56a9600-52f6-485e-bb80-159f6ee9e5ef","Type":"ContainerStarted","Data":"8ac790e3fa409149f7656f8d07fd3e2f8ef0f61499f51ef28309623e5622da1d"} Oct 07 13:24:35 crc kubenswrapper[5024]: I1007 13:24:35.062958 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9w2nx" podStartSLOduration=2.567338398 podStartE2EDuration="5.062936838s" podCreationTimestamp="2025-10-07 13:24:30 +0000 UTC" firstStartedPulling="2025-10-07 13:24:31.986611703 +0000 UTC m=+3410.062398551" lastFinishedPulling="2025-10-07 13:24:34.482210153 +0000 UTC m=+3412.557996991" observedRunningTime="2025-10-07 13:24:35.061992601 +0000 UTC m=+3413.137779449" watchObservedRunningTime="2025-10-07 13:24:35.062936838 +0000 UTC m=+3413.138723666" Oct 07 13:24:40 crc kubenswrapper[5024]: I1007 13:24:40.517550 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9w2nx" Oct 07 13:24:40 crc kubenswrapper[5024]: I1007 13:24:40.520379 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9w2nx" Oct 07 13:24:40 crc kubenswrapper[5024]: I1007 13:24:40.611447 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9w2nx" Oct 07 13:24:41 crc kubenswrapper[5024]: I1007 13:24:41.191050 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9w2nx" Oct 07 13:24:41 crc kubenswrapper[5024]: I1007 13:24:41.249786 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9w2nx"] Oct 07 13:24:43 crc kubenswrapper[5024]: I1007 13:24:43.126763 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9w2nx" podUID="e56a9600-52f6-485e-bb80-159f6ee9e5ef" containerName="registry-server" 
containerID="cri-o://8ac790e3fa409149f7656f8d07fd3e2f8ef0f61499f51ef28309623e5622da1d" gracePeriod=2 Oct 07 13:24:43 crc kubenswrapper[5024]: I1007 13:24:43.683389 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9w2nx" Oct 07 13:24:43 crc kubenswrapper[5024]: I1007 13:24:43.726607 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56a9600-52f6-485e-bb80-159f6ee9e5ef-catalog-content\") pod \"e56a9600-52f6-485e-bb80-159f6ee9e5ef\" (UID: \"e56a9600-52f6-485e-bb80-159f6ee9e5ef\") " Oct 07 13:24:43 crc kubenswrapper[5024]: I1007 13:24:43.726673 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56a9600-52f6-485e-bb80-159f6ee9e5ef-utilities\") pod \"e56a9600-52f6-485e-bb80-159f6ee9e5ef\" (UID: \"e56a9600-52f6-485e-bb80-159f6ee9e5ef\") " Oct 07 13:24:43 crc kubenswrapper[5024]: I1007 13:24:43.726710 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdq2x\" (UniqueName: \"kubernetes.io/projected/e56a9600-52f6-485e-bb80-159f6ee9e5ef-kube-api-access-hdq2x\") pod \"e56a9600-52f6-485e-bb80-159f6ee9e5ef\" (UID: \"e56a9600-52f6-485e-bb80-159f6ee9e5ef\") " Oct 07 13:24:43 crc kubenswrapper[5024]: I1007 13:24:43.727780 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e56a9600-52f6-485e-bb80-159f6ee9e5ef-utilities" (OuterVolumeSpecName: "utilities") pod "e56a9600-52f6-485e-bb80-159f6ee9e5ef" (UID: "e56a9600-52f6-485e-bb80-159f6ee9e5ef"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:24:43 crc kubenswrapper[5024]: I1007 13:24:43.740511 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e56a9600-52f6-485e-bb80-159f6ee9e5ef-kube-api-access-hdq2x" (OuterVolumeSpecName: "kube-api-access-hdq2x") pod "e56a9600-52f6-485e-bb80-159f6ee9e5ef" (UID: "e56a9600-52f6-485e-bb80-159f6ee9e5ef"). InnerVolumeSpecName "kube-api-access-hdq2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:24:43 crc kubenswrapper[5024]: I1007 13:24:43.786475 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e56a9600-52f6-485e-bb80-159f6ee9e5ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e56a9600-52f6-485e-bb80-159f6ee9e5ef" (UID: "e56a9600-52f6-485e-bb80-159f6ee9e5ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:24:43 crc kubenswrapper[5024]: I1007 13:24:43.830303 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56a9600-52f6-485e-bb80-159f6ee9e5ef-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 13:24:43 crc kubenswrapper[5024]: I1007 13:24:43.830351 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56a9600-52f6-485e-bb80-159f6ee9e5ef-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 13:24:43 crc kubenswrapper[5024]: I1007 13:24:43.830364 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdq2x\" (UniqueName: \"kubernetes.io/projected/e56a9600-52f6-485e-bb80-159f6ee9e5ef-kube-api-access-hdq2x\") on node \"crc\" DevicePath \"\"" Oct 07 13:24:44 crc kubenswrapper[5024]: I1007 13:24:44.140014 5024 generic.go:334] "Generic (PLEG): container finished" podID="e56a9600-52f6-485e-bb80-159f6ee9e5ef" 
containerID="8ac790e3fa409149f7656f8d07fd3e2f8ef0f61499f51ef28309623e5622da1d" exitCode=0 Oct 07 13:24:44 crc kubenswrapper[5024]: I1007 13:24:44.140071 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9w2nx" Oct 07 13:24:44 crc kubenswrapper[5024]: I1007 13:24:44.140078 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9w2nx" event={"ID":"e56a9600-52f6-485e-bb80-159f6ee9e5ef","Type":"ContainerDied","Data":"8ac790e3fa409149f7656f8d07fd3e2f8ef0f61499f51ef28309623e5622da1d"} Oct 07 13:24:44 crc kubenswrapper[5024]: I1007 13:24:44.140171 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9w2nx" event={"ID":"e56a9600-52f6-485e-bb80-159f6ee9e5ef","Type":"ContainerDied","Data":"f158ed70f7aab92a53b790ee59cef6b2ac591040705fc1bd7c06091f77381a24"} Oct 07 13:24:44 crc kubenswrapper[5024]: I1007 13:24:44.140198 5024 scope.go:117] "RemoveContainer" containerID="8ac790e3fa409149f7656f8d07fd3e2f8ef0f61499f51ef28309623e5622da1d" Oct 07 13:24:44 crc kubenswrapper[5024]: I1007 13:24:44.174886 5024 scope.go:117] "RemoveContainer" containerID="2614b1fc854643426fe79c900b6369933b620f3e40ea78a387a43f0aa5ce25c0" Oct 07 13:24:44 crc kubenswrapper[5024]: I1007 13:24:44.179132 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9w2nx"] Oct 07 13:24:44 crc kubenswrapper[5024]: I1007 13:24:44.188942 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9w2nx"] Oct 07 13:24:44 crc kubenswrapper[5024]: I1007 13:24:44.206669 5024 scope.go:117] "RemoveContainer" containerID="9a9c27748811ce92fa553c599c38356ab1f32352dc51a55e283499ae6f02c4a0" Oct 07 13:24:44 crc kubenswrapper[5024]: I1007 13:24:44.252070 5024 scope.go:117] "RemoveContainer" containerID="8ac790e3fa409149f7656f8d07fd3e2f8ef0f61499f51ef28309623e5622da1d" Oct 07 
13:24:44 crc kubenswrapper[5024]: E1007 13:24:44.252757 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ac790e3fa409149f7656f8d07fd3e2f8ef0f61499f51ef28309623e5622da1d\": container with ID starting with 8ac790e3fa409149f7656f8d07fd3e2f8ef0f61499f51ef28309623e5622da1d not found: ID does not exist" containerID="8ac790e3fa409149f7656f8d07fd3e2f8ef0f61499f51ef28309623e5622da1d" Oct 07 13:24:44 crc kubenswrapper[5024]: I1007 13:24:44.252812 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ac790e3fa409149f7656f8d07fd3e2f8ef0f61499f51ef28309623e5622da1d"} err="failed to get container status \"8ac790e3fa409149f7656f8d07fd3e2f8ef0f61499f51ef28309623e5622da1d\": rpc error: code = NotFound desc = could not find container \"8ac790e3fa409149f7656f8d07fd3e2f8ef0f61499f51ef28309623e5622da1d\": container with ID starting with 8ac790e3fa409149f7656f8d07fd3e2f8ef0f61499f51ef28309623e5622da1d not found: ID does not exist" Oct 07 13:24:44 crc kubenswrapper[5024]: I1007 13:24:44.252853 5024 scope.go:117] "RemoveContainer" containerID="2614b1fc854643426fe79c900b6369933b620f3e40ea78a387a43f0aa5ce25c0" Oct 07 13:24:44 crc kubenswrapper[5024]: E1007 13:24:44.253502 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2614b1fc854643426fe79c900b6369933b620f3e40ea78a387a43f0aa5ce25c0\": container with ID starting with 2614b1fc854643426fe79c900b6369933b620f3e40ea78a387a43f0aa5ce25c0 not found: ID does not exist" containerID="2614b1fc854643426fe79c900b6369933b620f3e40ea78a387a43f0aa5ce25c0" Oct 07 13:24:44 crc kubenswrapper[5024]: I1007 13:24:44.253583 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2614b1fc854643426fe79c900b6369933b620f3e40ea78a387a43f0aa5ce25c0"} err="failed to get container status 
\"2614b1fc854643426fe79c900b6369933b620f3e40ea78a387a43f0aa5ce25c0\": rpc error: code = NotFound desc = could not find container \"2614b1fc854643426fe79c900b6369933b620f3e40ea78a387a43f0aa5ce25c0\": container with ID starting with 2614b1fc854643426fe79c900b6369933b620f3e40ea78a387a43f0aa5ce25c0 not found: ID does not exist" Oct 07 13:24:44 crc kubenswrapper[5024]: I1007 13:24:44.253635 5024 scope.go:117] "RemoveContainer" containerID="9a9c27748811ce92fa553c599c38356ab1f32352dc51a55e283499ae6f02c4a0" Oct 07 13:24:44 crc kubenswrapper[5024]: E1007 13:24:44.254132 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a9c27748811ce92fa553c599c38356ab1f32352dc51a55e283499ae6f02c4a0\": container with ID starting with 9a9c27748811ce92fa553c599c38356ab1f32352dc51a55e283499ae6f02c4a0 not found: ID does not exist" containerID="9a9c27748811ce92fa553c599c38356ab1f32352dc51a55e283499ae6f02c4a0" Oct 07 13:24:44 crc kubenswrapper[5024]: I1007 13:24:44.254266 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a9c27748811ce92fa553c599c38356ab1f32352dc51a55e283499ae6f02c4a0"} err="failed to get container status \"9a9c27748811ce92fa553c599c38356ab1f32352dc51a55e283499ae6f02c4a0\": rpc error: code = NotFound desc = could not find container \"9a9c27748811ce92fa553c599c38356ab1f32352dc51a55e283499ae6f02c4a0\": container with ID starting with 9a9c27748811ce92fa553c599c38356ab1f32352dc51a55e283499ae6f02c4a0 not found: ID does not exist" Oct 07 13:24:44 crc kubenswrapper[5024]: I1007 13:24:44.766458 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e56a9600-52f6-485e-bb80-159f6ee9e5ef" path="/var/lib/kubelet/pods/e56a9600-52f6-485e-bb80-159f6ee9e5ef/volumes" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.478312 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Oct 07 13:25:13 crc 
kubenswrapper[5024]: E1007 13:25:13.480104 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56a9600-52f6-485e-bb80-159f6ee9e5ef" containerName="extract-utilities" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.480175 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56a9600-52f6-485e-bb80-159f6ee9e5ef" containerName="extract-utilities" Oct 07 13:25:13 crc kubenswrapper[5024]: E1007 13:25:13.480247 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56a9600-52f6-485e-bb80-159f6ee9e5ef" containerName="extract-content" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.480272 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56a9600-52f6-485e-bb80-159f6ee9e5ef" containerName="extract-content" Oct 07 13:25:13 crc kubenswrapper[5024]: E1007 13:25:13.480292 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56a9600-52f6-485e-bb80-159f6ee9e5ef" containerName="registry-server" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.480309 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56a9600-52f6-485e-bb80-159f6ee9e5ef" containerName="registry-server" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.480853 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="e56a9600-52f6-485e-bb80-159f6ee9e5ef" containerName="registry-server" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.482644 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.485934 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.486693 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-65b5w" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.490405 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.492356 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.500524 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.634400 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c269d1e5-beee-4868-949d-eb84e0d44521-config-data\") pod \"tempest-tests-tempest\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") " pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.634958 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c269d1e5-beee-4868-949d-eb84e0d44521-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") " pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.635509 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/c269d1e5-beee-4868-949d-eb84e0d44521-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") " pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.635617 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c269d1e5-beee-4868-949d-eb84e0d44521-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") " pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.635926 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c269d1e5-beee-4868-949d-eb84e0d44521-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") " pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.636450 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c269d1e5-beee-4868-949d-eb84e0d44521-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") " pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.636789 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c269d1e5-beee-4868-949d-eb84e0d44521-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") " pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.636898 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") " pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.637187 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7vmd\" (UniqueName: \"kubernetes.io/projected/c269d1e5-beee-4868-949d-eb84e0d44521-kube-api-access-n7vmd\") pod \"tempest-tests-tempest\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") " pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.740113 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c269d1e5-beee-4868-949d-eb84e0d44521-config-data\") pod \"tempest-tests-tempest\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") " pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.740868 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c269d1e5-beee-4868-949d-eb84e0d44521-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") " pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.741096 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c269d1e5-beee-4868-949d-eb84e0d44521-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") " pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.741423 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/c269d1e5-beee-4868-949d-eb84e0d44521-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") " pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.741481 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c269d1e5-beee-4868-949d-eb84e0d44521-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") " pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.741742 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c269d1e5-beee-4868-949d-eb84e0d44521-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") " pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.742030 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c269d1e5-beee-4868-949d-eb84e0d44521-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") " pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.742243 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c269d1e5-beee-4868-949d-eb84e0d44521-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") " pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.742323 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod 
\"tempest-tests-tempest\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") " pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.742594 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7vmd\" (UniqueName: \"kubernetes.io/projected/c269d1e5-beee-4868-949d-eb84e0d44521-kube-api-access-n7vmd\") pod \"tempest-tests-tempest\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") " pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.742818 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c269d1e5-beee-4868-949d-eb84e0d44521-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") " pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.743462 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c269d1e5-beee-4868-949d-eb84e0d44521-config-data\") pod \"tempest-tests-tempest\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") " pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.743677 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c269d1e5-beee-4868-949d-eb84e0d44521-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") " pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.744359 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") device mount path \"/mnt/openstack/pv04\"" 
pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.752288 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c269d1e5-beee-4868-949d-eb84e0d44521-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") " pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.753519 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c269d1e5-beee-4868-949d-eb84e0d44521-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") " pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.755337 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c269d1e5-beee-4868-949d-eb84e0d44521-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") " pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.780492 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7vmd\" (UniqueName: \"kubernetes.io/projected/c269d1e5-beee-4868-949d-eb84e0d44521-kube-api-access-n7vmd\") pod \"tempest-tests-tempest\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") " pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.794336 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") " pod="openstack/tempest-tests-tempest" Oct 07 13:25:13 crc kubenswrapper[5024]: I1007 13:25:13.827609 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 07 13:25:14 crc kubenswrapper[5024]: I1007 13:25:14.331496 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Oct 07 13:25:14 crc kubenswrapper[5024]: I1007 13:25:14.472112 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"c269d1e5-beee-4868-949d-eb84e0d44521","Type":"ContainerStarted","Data":"6b0aa77d3e52b6dc124cd6f75a2fca79fc2c06a993b3c8affb3557d0d6d033e5"} Oct 07 13:25:47 crc kubenswrapper[5024]: E1007 13:25:47.629671 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Oct 07 13:25:47 crc kubenswrapper[5024]: E1007 13:25:47.631395 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathE
xpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n7vmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
tempest-tests-tempest_openstack(c269d1e5-beee-4868-949d-eb84e0d44521): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 07 13:25:47 crc kubenswrapper[5024]: E1007 13:25:47.632672 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="c269d1e5-beee-4868-949d-eb84e0d44521" Oct 07 13:25:47 crc kubenswrapper[5024]: E1007 13:25:47.856758 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="c269d1e5-beee-4868-949d-eb84e0d44521" Oct 07 13:26:02 crc kubenswrapper[5024]: I1007 13:26:02.337474 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Oct 07 13:26:04 crc kubenswrapper[5024]: I1007 13:26:04.067977 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"c269d1e5-beee-4868-949d-eb84e0d44521","Type":"ContainerStarted","Data":"225b62f8c4deedb403267f5dc83a294ca8061adede0f195967692088714e2436"} Oct 07 13:26:04 crc kubenswrapper[5024]: I1007 13:26:04.096728 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.110604661 podStartE2EDuration="52.096709157s" podCreationTimestamp="2025-10-07 13:25:12 +0000 UTC" firstStartedPulling="2025-10-07 13:25:14.347124569 +0000 UTC m=+3452.422911427" lastFinishedPulling="2025-10-07 13:26:02.333229055 +0000 UTC m=+3500.409015923" observedRunningTime="2025-10-07 13:26:04.094954687 +0000 UTC m=+3502.170741545" 
watchObservedRunningTime="2025-10-07 13:26:04.096709157 +0000 UTC m=+3502.172495995" Oct 07 13:26:13 crc kubenswrapper[5024]: I1007 13:26:13.720623 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 13:26:13 crc kubenswrapper[5024]: I1007 13:26:13.721394 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 13:26:43 crc kubenswrapper[5024]: I1007 13:26:43.720947 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 13:26:43 crc kubenswrapper[5024]: I1007 13:26:43.721820 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 13:27:09 crc kubenswrapper[5024]: I1007 13:27:09.066232 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fzjx8"] Oct 07 13:27:09 crc kubenswrapper[5024]: I1007 13:27:09.069722 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fzjx8" Oct 07 13:27:09 crc kubenswrapper[5024]: I1007 13:27:09.077114 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fzjx8"] Oct 07 13:27:09 crc kubenswrapper[5024]: I1007 13:27:09.189688 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5bea8ae-57c0-4362-a612-e7debe5852b1-catalog-content\") pod \"redhat-marketplace-fzjx8\" (UID: \"e5bea8ae-57c0-4362-a612-e7debe5852b1\") " pod="openshift-marketplace/redhat-marketplace-fzjx8" Oct 07 13:27:09 crc kubenswrapper[5024]: I1007 13:27:09.189900 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5bea8ae-57c0-4362-a612-e7debe5852b1-utilities\") pod \"redhat-marketplace-fzjx8\" (UID: \"e5bea8ae-57c0-4362-a612-e7debe5852b1\") " pod="openshift-marketplace/redhat-marketplace-fzjx8" Oct 07 13:27:09 crc kubenswrapper[5024]: I1007 13:27:09.190329 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85nn7\" (UniqueName: \"kubernetes.io/projected/e5bea8ae-57c0-4362-a612-e7debe5852b1-kube-api-access-85nn7\") pod \"redhat-marketplace-fzjx8\" (UID: \"e5bea8ae-57c0-4362-a612-e7debe5852b1\") " pod="openshift-marketplace/redhat-marketplace-fzjx8" Oct 07 13:27:09 crc kubenswrapper[5024]: I1007 13:27:09.292711 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85nn7\" (UniqueName: \"kubernetes.io/projected/e5bea8ae-57c0-4362-a612-e7debe5852b1-kube-api-access-85nn7\") pod \"redhat-marketplace-fzjx8\" (UID: \"e5bea8ae-57c0-4362-a612-e7debe5852b1\") " pod="openshift-marketplace/redhat-marketplace-fzjx8" Oct 07 13:27:09 crc kubenswrapper[5024]: I1007 13:27:09.292885 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5bea8ae-57c0-4362-a612-e7debe5852b1-catalog-content\") pod \"redhat-marketplace-fzjx8\" (UID: \"e5bea8ae-57c0-4362-a612-e7debe5852b1\") " pod="openshift-marketplace/redhat-marketplace-fzjx8" Oct 07 13:27:09 crc kubenswrapper[5024]: I1007 13:27:09.292944 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5bea8ae-57c0-4362-a612-e7debe5852b1-utilities\") pod \"redhat-marketplace-fzjx8\" (UID: \"e5bea8ae-57c0-4362-a612-e7debe5852b1\") " pod="openshift-marketplace/redhat-marketplace-fzjx8" Oct 07 13:27:09 crc kubenswrapper[5024]: I1007 13:27:09.293491 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5bea8ae-57c0-4362-a612-e7debe5852b1-catalog-content\") pod \"redhat-marketplace-fzjx8\" (UID: \"e5bea8ae-57c0-4362-a612-e7debe5852b1\") " pod="openshift-marketplace/redhat-marketplace-fzjx8" Oct 07 13:27:09 crc kubenswrapper[5024]: I1007 13:27:09.293602 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5bea8ae-57c0-4362-a612-e7debe5852b1-utilities\") pod \"redhat-marketplace-fzjx8\" (UID: \"e5bea8ae-57c0-4362-a612-e7debe5852b1\") " pod="openshift-marketplace/redhat-marketplace-fzjx8" Oct 07 13:27:09 crc kubenswrapper[5024]: I1007 13:27:09.316012 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85nn7\" (UniqueName: \"kubernetes.io/projected/e5bea8ae-57c0-4362-a612-e7debe5852b1-kube-api-access-85nn7\") pod \"redhat-marketplace-fzjx8\" (UID: \"e5bea8ae-57c0-4362-a612-e7debe5852b1\") " pod="openshift-marketplace/redhat-marketplace-fzjx8" Oct 07 13:27:09 crc kubenswrapper[5024]: I1007 13:27:09.404664 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fzjx8" Oct 07 13:27:09 crc kubenswrapper[5024]: I1007 13:27:09.877125 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fzjx8"] Oct 07 13:27:09 crc kubenswrapper[5024]: I1007 13:27:09.904399 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzjx8" event={"ID":"e5bea8ae-57c0-4362-a612-e7debe5852b1","Type":"ContainerStarted","Data":"517893b23d8e7eee0258563fd49ebccff723a6ae0cb338b155f2c6e49ce1bace"} Oct 07 13:27:10 crc kubenswrapper[5024]: I1007 13:27:10.922042 5024 generic.go:334] "Generic (PLEG): container finished" podID="e5bea8ae-57c0-4362-a612-e7debe5852b1" containerID="eadf05585972d10b8574915d4ad6d1f9d3335e405dbcd30f4f9d1df395ca0eac" exitCode=0 Oct 07 13:27:10 crc kubenswrapper[5024]: I1007 13:27:10.922684 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzjx8" event={"ID":"e5bea8ae-57c0-4362-a612-e7debe5852b1","Type":"ContainerDied","Data":"eadf05585972d10b8574915d4ad6d1f9d3335e405dbcd30f4f9d1df395ca0eac"} Oct 07 13:27:11 crc kubenswrapper[5024]: I1007 13:27:11.941290 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzjx8" event={"ID":"e5bea8ae-57c0-4362-a612-e7debe5852b1","Type":"ContainerStarted","Data":"667ecb363a29384926df8f944a0125e7ba4951564105704c8d7ffc965e55ec4f"} Oct 07 13:27:12 crc kubenswrapper[5024]: I1007 13:27:12.953382 5024 generic.go:334] "Generic (PLEG): container finished" podID="e5bea8ae-57c0-4362-a612-e7debe5852b1" containerID="667ecb363a29384926df8f944a0125e7ba4951564105704c8d7ffc965e55ec4f" exitCode=0 Oct 07 13:27:12 crc kubenswrapper[5024]: I1007 13:27:12.953439 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzjx8" 
event={"ID":"e5bea8ae-57c0-4362-a612-e7debe5852b1","Type":"ContainerDied","Data":"667ecb363a29384926df8f944a0125e7ba4951564105704c8d7ffc965e55ec4f"} Oct 07 13:27:13 crc kubenswrapper[5024]: I1007 13:27:13.720971 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 13:27:13 crc kubenswrapper[5024]: I1007 13:27:13.721528 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 13:27:13 crc kubenswrapper[5024]: I1007 13:27:13.721636 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 13:27:13 crc kubenswrapper[5024]: I1007 13:27:13.723330 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d044cb36aa84e31c51266795b5076f0540682ca4ddf68579c24c9349196a1751"} pod="openshift-machine-config-operator/machine-config-daemon-t95cr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 07 13:27:13 crc kubenswrapper[5024]: I1007 13:27:13.723445 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" containerID="cri-o://d044cb36aa84e31c51266795b5076f0540682ca4ddf68579c24c9349196a1751" gracePeriod=600 Oct 07 13:27:13 crc kubenswrapper[5024]: I1007 13:27:13.975453 5024 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzjx8" event={"ID":"e5bea8ae-57c0-4362-a612-e7debe5852b1","Type":"ContainerStarted","Data":"15b337498f96938f17f3a48c4e979953d27c4b5583828f549a5f436155c387ec"} Oct 07 13:27:13 crc kubenswrapper[5024]: I1007 13:27:13.980730 5024 generic.go:334] "Generic (PLEG): container finished" podID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerID="d044cb36aa84e31c51266795b5076f0540682ca4ddf68579c24c9349196a1751" exitCode=0 Oct 07 13:27:13 crc kubenswrapper[5024]: I1007 13:27:13.980776 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerDied","Data":"d044cb36aa84e31c51266795b5076f0540682ca4ddf68579c24c9349196a1751"} Oct 07 13:27:13 crc kubenswrapper[5024]: I1007 13:27:13.980815 5024 scope.go:117] "RemoveContainer" containerID="7eb153319643472d19a6bf09958339d91a214645b5dda1970a8ae756bee849cd" Oct 07 13:27:14 crc kubenswrapper[5024]: I1007 13:27:14.010560 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fzjx8" podStartSLOduration=2.424879964 podStartE2EDuration="5.010537804s" podCreationTimestamp="2025-10-07 13:27:09 +0000 UTC" firstStartedPulling="2025-10-07 13:27:10.931710649 +0000 UTC m=+3569.007497527" lastFinishedPulling="2025-10-07 13:27:13.517368519 +0000 UTC m=+3571.593155367" observedRunningTime="2025-10-07 13:27:14.002059828 +0000 UTC m=+3572.077846666" watchObservedRunningTime="2025-10-07 13:27:14.010537804 +0000 UTC m=+3572.086324642" Oct 07 13:27:15 crc kubenswrapper[5024]: I1007 13:27:15.001560 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerStarted","Data":"37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95"} 
Oct 07 13:27:19 crc kubenswrapper[5024]: I1007 13:27:19.405349 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fzjx8" Oct 07 13:27:19 crc kubenswrapper[5024]: I1007 13:27:19.406040 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fzjx8" Oct 07 13:27:19 crc kubenswrapper[5024]: I1007 13:27:19.503765 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fzjx8" Oct 07 13:27:20 crc kubenswrapper[5024]: I1007 13:27:20.161471 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fzjx8" Oct 07 13:27:20 crc kubenswrapper[5024]: I1007 13:27:20.245696 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fzjx8"] Oct 07 13:27:22 crc kubenswrapper[5024]: I1007 13:27:22.104691 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fzjx8" podUID="e5bea8ae-57c0-4362-a612-e7debe5852b1" containerName="registry-server" containerID="cri-o://15b337498f96938f17f3a48c4e979953d27c4b5583828f549a5f436155c387ec" gracePeriod=2 Oct 07 13:27:22 crc kubenswrapper[5024]: I1007 13:27:22.614037 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fzjx8" Oct 07 13:27:22 crc kubenswrapper[5024]: I1007 13:27:22.717413 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85nn7\" (UniqueName: \"kubernetes.io/projected/e5bea8ae-57c0-4362-a612-e7debe5852b1-kube-api-access-85nn7\") pod \"e5bea8ae-57c0-4362-a612-e7debe5852b1\" (UID: \"e5bea8ae-57c0-4362-a612-e7debe5852b1\") " Oct 07 13:27:22 crc kubenswrapper[5024]: I1007 13:27:22.717919 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5bea8ae-57c0-4362-a612-e7debe5852b1-catalog-content\") pod \"e5bea8ae-57c0-4362-a612-e7debe5852b1\" (UID: \"e5bea8ae-57c0-4362-a612-e7debe5852b1\") " Oct 07 13:27:22 crc kubenswrapper[5024]: I1007 13:27:22.718077 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5bea8ae-57c0-4362-a612-e7debe5852b1-utilities\") pod \"e5bea8ae-57c0-4362-a612-e7debe5852b1\" (UID: \"e5bea8ae-57c0-4362-a612-e7debe5852b1\") " Oct 07 13:27:22 crc kubenswrapper[5024]: I1007 13:27:22.719755 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5bea8ae-57c0-4362-a612-e7debe5852b1-utilities" (OuterVolumeSpecName: "utilities") pod "e5bea8ae-57c0-4362-a612-e7debe5852b1" (UID: "e5bea8ae-57c0-4362-a612-e7debe5852b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:27:22 crc kubenswrapper[5024]: I1007 13:27:22.724378 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5bea8ae-57c0-4362-a612-e7debe5852b1-kube-api-access-85nn7" (OuterVolumeSpecName: "kube-api-access-85nn7") pod "e5bea8ae-57c0-4362-a612-e7debe5852b1" (UID: "e5bea8ae-57c0-4362-a612-e7debe5852b1"). InnerVolumeSpecName "kube-api-access-85nn7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:27:22 crc kubenswrapper[5024]: I1007 13:27:22.733595 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5bea8ae-57c0-4362-a612-e7debe5852b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5bea8ae-57c0-4362-a612-e7debe5852b1" (UID: "e5bea8ae-57c0-4362-a612-e7debe5852b1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:27:22 crc kubenswrapper[5024]: I1007 13:27:22.820843 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5bea8ae-57c0-4362-a612-e7debe5852b1-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 13:27:22 crc kubenswrapper[5024]: I1007 13:27:22.820882 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85nn7\" (UniqueName: \"kubernetes.io/projected/e5bea8ae-57c0-4362-a612-e7debe5852b1-kube-api-access-85nn7\") on node \"crc\" DevicePath \"\"" Oct 07 13:27:22 crc kubenswrapper[5024]: I1007 13:27:22.820891 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5bea8ae-57c0-4362-a612-e7debe5852b1-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 13:27:23 crc kubenswrapper[5024]: I1007 13:27:23.120754 5024 generic.go:334] "Generic (PLEG): container finished" podID="e5bea8ae-57c0-4362-a612-e7debe5852b1" containerID="15b337498f96938f17f3a48c4e979953d27c4b5583828f549a5f436155c387ec" exitCode=0 Oct 07 13:27:23 crc kubenswrapper[5024]: I1007 13:27:23.120826 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzjx8" event={"ID":"e5bea8ae-57c0-4362-a612-e7debe5852b1","Type":"ContainerDied","Data":"15b337498f96938f17f3a48c4e979953d27c4b5583828f549a5f436155c387ec"} Oct 07 13:27:23 crc kubenswrapper[5024]: I1007 13:27:23.120883 5024 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-fzjx8" event={"ID":"e5bea8ae-57c0-4362-a612-e7debe5852b1","Type":"ContainerDied","Data":"517893b23d8e7eee0258563fd49ebccff723a6ae0cb338b155f2c6e49ce1bace"} Oct 07 13:27:23 crc kubenswrapper[5024]: I1007 13:27:23.120895 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fzjx8" Oct 07 13:27:23 crc kubenswrapper[5024]: I1007 13:27:23.120915 5024 scope.go:117] "RemoveContainer" containerID="15b337498f96938f17f3a48c4e979953d27c4b5583828f549a5f436155c387ec" Oct 07 13:27:23 crc kubenswrapper[5024]: I1007 13:27:23.161481 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fzjx8"] Oct 07 13:27:23 crc kubenswrapper[5024]: I1007 13:27:23.161545 5024 scope.go:117] "RemoveContainer" containerID="667ecb363a29384926df8f944a0125e7ba4951564105704c8d7ffc965e55ec4f" Oct 07 13:27:23 crc kubenswrapper[5024]: I1007 13:27:23.187386 5024 scope.go:117] "RemoveContainer" containerID="eadf05585972d10b8574915d4ad6d1f9d3335e405dbcd30f4f9d1df395ca0eac" Oct 07 13:27:23 crc kubenswrapper[5024]: I1007 13:27:23.189766 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fzjx8"] Oct 07 13:27:23 crc kubenswrapper[5024]: I1007 13:27:23.258257 5024 scope.go:117] "RemoveContainer" containerID="15b337498f96938f17f3a48c4e979953d27c4b5583828f549a5f436155c387ec" Oct 07 13:27:23 crc kubenswrapper[5024]: E1007 13:27:23.259070 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15b337498f96938f17f3a48c4e979953d27c4b5583828f549a5f436155c387ec\": container with ID starting with 15b337498f96938f17f3a48c4e979953d27c4b5583828f549a5f436155c387ec not found: ID does not exist" containerID="15b337498f96938f17f3a48c4e979953d27c4b5583828f549a5f436155c387ec" Oct 07 13:27:23 crc kubenswrapper[5024]: I1007 13:27:23.259312 5024 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15b337498f96938f17f3a48c4e979953d27c4b5583828f549a5f436155c387ec"} err="failed to get container status \"15b337498f96938f17f3a48c4e979953d27c4b5583828f549a5f436155c387ec\": rpc error: code = NotFound desc = could not find container \"15b337498f96938f17f3a48c4e979953d27c4b5583828f549a5f436155c387ec\": container with ID starting with 15b337498f96938f17f3a48c4e979953d27c4b5583828f549a5f436155c387ec not found: ID does not exist" Oct 07 13:27:23 crc kubenswrapper[5024]: I1007 13:27:23.259379 5024 scope.go:117] "RemoveContainer" containerID="667ecb363a29384926df8f944a0125e7ba4951564105704c8d7ffc965e55ec4f" Oct 07 13:27:23 crc kubenswrapper[5024]: E1007 13:27:23.260228 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"667ecb363a29384926df8f944a0125e7ba4951564105704c8d7ffc965e55ec4f\": container with ID starting with 667ecb363a29384926df8f944a0125e7ba4951564105704c8d7ffc965e55ec4f not found: ID does not exist" containerID="667ecb363a29384926df8f944a0125e7ba4951564105704c8d7ffc965e55ec4f" Oct 07 13:27:23 crc kubenswrapper[5024]: I1007 13:27:23.260290 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"667ecb363a29384926df8f944a0125e7ba4951564105704c8d7ffc965e55ec4f"} err="failed to get container status \"667ecb363a29384926df8f944a0125e7ba4951564105704c8d7ffc965e55ec4f\": rpc error: code = NotFound desc = could not find container \"667ecb363a29384926df8f944a0125e7ba4951564105704c8d7ffc965e55ec4f\": container with ID starting with 667ecb363a29384926df8f944a0125e7ba4951564105704c8d7ffc965e55ec4f not found: ID does not exist" Oct 07 13:27:23 crc kubenswrapper[5024]: I1007 13:27:23.260357 5024 scope.go:117] "RemoveContainer" containerID="eadf05585972d10b8574915d4ad6d1f9d3335e405dbcd30f4f9d1df395ca0eac" Oct 07 13:27:23 crc kubenswrapper[5024]: E1007 
13:27:23.260829 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eadf05585972d10b8574915d4ad6d1f9d3335e405dbcd30f4f9d1df395ca0eac\": container with ID starting with eadf05585972d10b8574915d4ad6d1f9d3335e405dbcd30f4f9d1df395ca0eac not found: ID does not exist" containerID="eadf05585972d10b8574915d4ad6d1f9d3335e405dbcd30f4f9d1df395ca0eac" Oct 07 13:27:23 crc kubenswrapper[5024]: I1007 13:27:23.260885 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eadf05585972d10b8574915d4ad6d1f9d3335e405dbcd30f4f9d1df395ca0eac"} err="failed to get container status \"eadf05585972d10b8574915d4ad6d1f9d3335e405dbcd30f4f9d1df395ca0eac\": rpc error: code = NotFound desc = could not find container \"eadf05585972d10b8574915d4ad6d1f9d3335e405dbcd30f4f9d1df395ca0eac\": container with ID starting with eadf05585972d10b8574915d4ad6d1f9d3335e405dbcd30f4f9d1df395ca0eac not found: ID does not exist" Oct 07 13:27:24 crc kubenswrapper[5024]: I1007 13:27:24.768423 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5bea8ae-57c0-4362-a612-e7debe5852b1" path="/var/lib/kubelet/pods/e5bea8ae-57c0-4362-a612-e7debe5852b1/volumes" Oct 07 13:28:49 crc kubenswrapper[5024]: I1007 13:28:49.196480 5024 scope.go:117] "RemoveContainer" containerID="2e4286714c1ca8b284240c76ec8ed3ade152623a7d0204e1685244aec5fb0d60" Oct 07 13:28:49 crc kubenswrapper[5024]: I1007 13:28:49.252043 5024 scope.go:117] "RemoveContainer" containerID="8e05b5fff30621b65569054299e5fac8a88c18835fdb5441c3503eb5b55a8477" Oct 07 13:29:43 crc kubenswrapper[5024]: I1007 13:29:43.720198 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 13:29:43 crc 
kubenswrapper[5024]: I1007 13:29:43.721004 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 13:29:49 crc kubenswrapper[5024]: I1007 13:29:49.325829 5024 scope.go:117] "RemoveContainer" containerID="cda5464159d98988333e73e6053158c768f8efee60c3913f6f90b0aaedce8f4f" Oct 07 13:30:00 crc kubenswrapper[5024]: I1007 13:30:00.185981 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29330730-d6x5h"] Oct 07 13:30:00 crc kubenswrapper[5024]: E1007 13:30:00.193928 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5bea8ae-57c0-4362-a612-e7debe5852b1" containerName="extract-utilities" Oct 07 13:30:00 crc kubenswrapper[5024]: I1007 13:30:00.193957 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5bea8ae-57c0-4362-a612-e7debe5852b1" containerName="extract-utilities" Oct 07 13:30:00 crc kubenswrapper[5024]: E1007 13:30:00.193969 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5bea8ae-57c0-4362-a612-e7debe5852b1" containerName="registry-server" Oct 07 13:30:00 crc kubenswrapper[5024]: I1007 13:30:00.193977 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5bea8ae-57c0-4362-a612-e7debe5852b1" containerName="registry-server" Oct 07 13:30:00 crc kubenswrapper[5024]: E1007 13:30:00.193996 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5bea8ae-57c0-4362-a612-e7debe5852b1" containerName="extract-content" Oct 07 13:30:00 crc kubenswrapper[5024]: I1007 13:30:00.194005 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5bea8ae-57c0-4362-a612-e7debe5852b1" containerName="extract-content" Oct 07 13:30:00 crc kubenswrapper[5024]: I1007 
13:30:00.194281 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5bea8ae-57c0-4362-a612-e7debe5852b1" containerName="registry-server" Oct 07 13:30:00 crc kubenswrapper[5024]: I1007 13:30:00.196763 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29330730-d6x5h" Oct 07 13:30:00 crc kubenswrapper[5024]: I1007 13:30:00.203881 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 07 13:30:00 crc kubenswrapper[5024]: I1007 13:30:00.204232 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 07 13:30:00 crc kubenswrapper[5024]: I1007 13:30:00.241106 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29330730-d6x5h"] Oct 07 13:30:00 crc kubenswrapper[5024]: I1007 13:30:00.299728 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6h9r\" (UniqueName: \"kubernetes.io/projected/e5be4690-d775-4d30-91ae-361d80fcfe02-kube-api-access-c6h9r\") pod \"collect-profiles-29330730-d6x5h\" (UID: \"e5be4690-d775-4d30-91ae-361d80fcfe02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330730-d6x5h" Oct 07 13:30:00 crc kubenswrapper[5024]: I1007 13:30:00.299914 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5be4690-d775-4d30-91ae-361d80fcfe02-secret-volume\") pod \"collect-profiles-29330730-d6x5h\" (UID: \"e5be4690-d775-4d30-91ae-361d80fcfe02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330730-d6x5h" Oct 07 13:30:00 crc kubenswrapper[5024]: I1007 13:30:00.300078 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5be4690-d775-4d30-91ae-361d80fcfe02-config-volume\") pod \"collect-profiles-29330730-d6x5h\" (UID: \"e5be4690-d775-4d30-91ae-361d80fcfe02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330730-d6x5h" Oct 07 13:30:00 crc kubenswrapper[5024]: I1007 13:30:00.403235 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6h9r\" (UniqueName: \"kubernetes.io/projected/e5be4690-d775-4d30-91ae-361d80fcfe02-kube-api-access-c6h9r\") pod \"collect-profiles-29330730-d6x5h\" (UID: \"e5be4690-d775-4d30-91ae-361d80fcfe02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330730-d6x5h" Oct 07 13:30:00 crc kubenswrapper[5024]: I1007 13:30:00.403694 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5be4690-d775-4d30-91ae-361d80fcfe02-secret-volume\") pod \"collect-profiles-29330730-d6x5h\" (UID: \"e5be4690-d775-4d30-91ae-361d80fcfe02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330730-d6x5h" Oct 07 13:30:00 crc kubenswrapper[5024]: I1007 13:30:00.403876 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5be4690-d775-4d30-91ae-361d80fcfe02-config-volume\") pod \"collect-profiles-29330730-d6x5h\" (UID: \"e5be4690-d775-4d30-91ae-361d80fcfe02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330730-d6x5h" Oct 07 13:30:00 crc kubenswrapper[5024]: I1007 13:30:00.404989 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5be4690-d775-4d30-91ae-361d80fcfe02-config-volume\") pod \"collect-profiles-29330730-d6x5h\" (UID: \"e5be4690-d775-4d30-91ae-361d80fcfe02\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29330730-d6x5h" Oct 07 13:30:00 crc kubenswrapper[5024]: I1007 13:30:00.414600 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5be4690-d775-4d30-91ae-361d80fcfe02-secret-volume\") pod \"collect-profiles-29330730-d6x5h\" (UID: \"e5be4690-d775-4d30-91ae-361d80fcfe02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330730-d6x5h" Oct 07 13:30:00 crc kubenswrapper[5024]: I1007 13:30:00.423443 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6h9r\" (UniqueName: \"kubernetes.io/projected/e5be4690-d775-4d30-91ae-361d80fcfe02-kube-api-access-c6h9r\") pod \"collect-profiles-29330730-d6x5h\" (UID: \"e5be4690-d775-4d30-91ae-361d80fcfe02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330730-d6x5h" Oct 07 13:30:00 crc kubenswrapper[5024]: I1007 13:30:00.541014 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29330730-d6x5h" Oct 07 13:30:01 crc kubenswrapper[5024]: I1007 13:30:01.065810 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29330730-d6x5h"] Oct 07 13:30:02 crc kubenswrapper[5024]: I1007 13:30:02.068894 5024 generic.go:334] "Generic (PLEG): container finished" podID="e5be4690-d775-4d30-91ae-361d80fcfe02" containerID="f8f17ee867a462443465f630172bb214180b10f516801479e42b7b5da2e3d496" exitCode=0 Oct 07 13:30:02 crc kubenswrapper[5024]: I1007 13:30:02.068959 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29330730-d6x5h" event={"ID":"e5be4690-d775-4d30-91ae-361d80fcfe02","Type":"ContainerDied","Data":"f8f17ee867a462443465f630172bb214180b10f516801479e42b7b5da2e3d496"} Oct 07 13:30:02 crc kubenswrapper[5024]: I1007 13:30:02.069335 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29330730-d6x5h" event={"ID":"e5be4690-d775-4d30-91ae-361d80fcfe02","Type":"ContainerStarted","Data":"7939c92a34d556f941212c0765dda634064cec3510f9e8279a522088f6789583"} Oct 07 13:30:03 crc kubenswrapper[5024]: I1007 13:30:03.523342 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29330730-d6x5h" Oct 07 13:30:03 crc kubenswrapper[5024]: I1007 13:30:03.698972 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6h9r\" (UniqueName: \"kubernetes.io/projected/e5be4690-d775-4d30-91ae-361d80fcfe02-kube-api-access-c6h9r\") pod \"e5be4690-d775-4d30-91ae-361d80fcfe02\" (UID: \"e5be4690-d775-4d30-91ae-361d80fcfe02\") " Oct 07 13:30:03 crc kubenswrapper[5024]: I1007 13:30:03.699050 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5be4690-d775-4d30-91ae-361d80fcfe02-config-volume\") pod \"e5be4690-d775-4d30-91ae-361d80fcfe02\" (UID: \"e5be4690-d775-4d30-91ae-361d80fcfe02\") " Oct 07 13:30:03 crc kubenswrapper[5024]: I1007 13:30:03.699328 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5be4690-d775-4d30-91ae-361d80fcfe02-secret-volume\") pod \"e5be4690-d775-4d30-91ae-361d80fcfe02\" (UID: \"e5be4690-d775-4d30-91ae-361d80fcfe02\") " Oct 07 13:30:03 crc kubenswrapper[5024]: I1007 13:30:03.700254 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5be4690-d775-4d30-91ae-361d80fcfe02-config-volume" (OuterVolumeSpecName: "config-volume") pod "e5be4690-d775-4d30-91ae-361d80fcfe02" (UID: "e5be4690-d775-4d30-91ae-361d80fcfe02"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 13:30:03 crc kubenswrapper[5024]: I1007 13:30:03.712246 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5be4690-d775-4d30-91ae-361d80fcfe02-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e5be4690-d775-4d30-91ae-361d80fcfe02" (UID: "e5be4690-d775-4d30-91ae-361d80fcfe02"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:30:03 crc kubenswrapper[5024]: I1007 13:30:03.716473 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5be4690-d775-4d30-91ae-361d80fcfe02-kube-api-access-c6h9r" (OuterVolumeSpecName: "kube-api-access-c6h9r") pod "e5be4690-d775-4d30-91ae-361d80fcfe02" (UID: "e5be4690-d775-4d30-91ae-361d80fcfe02"). InnerVolumeSpecName "kube-api-access-c6h9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:30:03 crc kubenswrapper[5024]: I1007 13:30:03.802846 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6h9r\" (UniqueName: \"kubernetes.io/projected/e5be4690-d775-4d30-91ae-361d80fcfe02-kube-api-access-c6h9r\") on node \"crc\" DevicePath \"\"" Oct 07 13:30:03 crc kubenswrapper[5024]: I1007 13:30:03.802919 5024 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5be4690-d775-4d30-91ae-361d80fcfe02-config-volume\") on node \"crc\" DevicePath \"\"" Oct 07 13:30:03 crc kubenswrapper[5024]: I1007 13:30:03.802933 5024 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5be4690-d775-4d30-91ae-361d80fcfe02-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 07 13:30:04 crc kubenswrapper[5024]: I1007 13:30:04.089689 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29330730-d6x5h" event={"ID":"e5be4690-d775-4d30-91ae-361d80fcfe02","Type":"ContainerDied","Data":"7939c92a34d556f941212c0765dda634064cec3510f9e8279a522088f6789583"} Oct 07 13:30:04 crc kubenswrapper[5024]: I1007 13:30:04.089737 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7939c92a34d556f941212c0765dda634064cec3510f9e8279a522088f6789583" Oct 07 13:30:04 crc kubenswrapper[5024]: I1007 13:30:04.090173 5024 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29330730-d6x5h" Oct 07 13:30:04 crc kubenswrapper[5024]: I1007 13:30:04.607867 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29330685-rh22x"] Oct 07 13:30:04 crc kubenswrapper[5024]: I1007 13:30:04.649051 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29330685-rh22x"] Oct 07 13:30:04 crc kubenswrapper[5024]: I1007 13:30:04.766550 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51fa621d-3db6-4bda-b380-1c8972b3005b" path="/var/lib/kubelet/pods/51fa621d-3db6-4bda-b380-1c8972b3005b/volumes" Oct 07 13:30:13 crc kubenswrapper[5024]: I1007 13:30:13.720797 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 13:30:13 crc kubenswrapper[5024]: I1007 13:30:13.723409 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 13:30:43 crc kubenswrapper[5024]: I1007 13:30:43.720270 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 13:30:43 crc kubenswrapper[5024]: I1007 13:30:43.720779 5024 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 13:30:43 crc kubenswrapper[5024]: I1007 13:30:43.720832 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 13:30:43 crc kubenswrapper[5024]: I1007 13:30:43.722012 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95"} pod="openshift-machine-config-operator/machine-config-daemon-t95cr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 07 13:30:43 crc kubenswrapper[5024]: I1007 13:30:43.722065 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" containerID="cri-o://37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95" gracePeriod=600 Oct 07 13:30:43 crc kubenswrapper[5024]: E1007 13:30:43.857198 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:30:44 crc kubenswrapper[5024]: I1007 13:30:44.494634 5024 generic.go:334] "Generic (PLEG): container finished" podID="273432b3-0436-4a74-afa3-7070f9bf5b3b" 
containerID="37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95" exitCode=0 Oct 07 13:30:44 crc kubenswrapper[5024]: I1007 13:30:44.494687 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerDied","Data":"37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95"} Oct 07 13:30:44 crc kubenswrapper[5024]: I1007 13:30:44.495093 5024 scope.go:117] "RemoveContainer" containerID="d044cb36aa84e31c51266795b5076f0540682ca4ddf68579c24c9349196a1751" Oct 07 13:30:44 crc kubenswrapper[5024]: I1007 13:30:44.495927 5024 scope.go:117] "RemoveContainer" containerID="37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95" Oct 07 13:30:44 crc kubenswrapper[5024]: E1007 13:30:44.496449 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:30:49 crc kubenswrapper[5024]: I1007 13:30:49.393473 5024 scope.go:117] "RemoveContainer" containerID="6a11951ea5219702334649faecc0783ad19d417d59a8fa3326ac9e95caadd9b2" Oct 07 13:30:58 crc kubenswrapper[5024]: I1007 13:30:58.752409 5024 scope.go:117] "RemoveContainer" containerID="37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95" Oct 07 13:30:58 crc kubenswrapper[5024]: E1007 13:30:58.753554 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:31:06 crc kubenswrapper[5024]: E1007 13:31:06.974663 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Oct 07 13:31:09 crc kubenswrapper[5024]: I1007 13:31:09.751660 5024 scope.go:117] "RemoveContainer" containerID="37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95" Oct 07 13:31:09 crc kubenswrapper[5024]: E1007 13:31:09.752749 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:31:24 crc kubenswrapper[5024]: I1007 13:31:24.753030 5024 scope.go:117] "RemoveContainer" containerID="37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95" Oct 07 13:31:24 crc kubenswrapper[5024]: E1007 13:31:24.754006 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:31:39 crc kubenswrapper[5024]: I1007 13:31:39.752467 5024 scope.go:117] "RemoveContainer" 
containerID="37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95" Oct 07 13:31:39 crc kubenswrapper[5024]: E1007 13:31:39.753452 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:31:42 crc kubenswrapper[5024]: I1007 13:31:42.681585 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jgz8r"] Oct 07 13:31:42 crc kubenswrapper[5024]: E1007 13:31:42.683572 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5be4690-d775-4d30-91ae-361d80fcfe02" containerName="collect-profiles" Oct 07 13:31:42 crc kubenswrapper[5024]: I1007 13:31:42.683608 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5be4690-d775-4d30-91ae-361d80fcfe02" containerName="collect-profiles" Oct 07 13:31:42 crc kubenswrapper[5024]: I1007 13:31:42.684201 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5be4690-d775-4d30-91ae-361d80fcfe02" containerName="collect-profiles" Oct 07 13:31:42 crc kubenswrapper[5024]: I1007 13:31:42.689679 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jgz8r" Oct 07 13:31:42 crc kubenswrapper[5024]: I1007 13:31:42.694805 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jgz8r"] Oct 07 13:31:42 crc kubenswrapper[5024]: I1007 13:31:42.717045 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53417345-8c17-450a-a5e7-92a877f2363f-catalog-content\") pod \"community-operators-jgz8r\" (UID: \"53417345-8c17-450a-a5e7-92a877f2363f\") " pod="openshift-marketplace/community-operators-jgz8r" Oct 07 13:31:42 crc kubenswrapper[5024]: I1007 13:31:42.717121 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q97m\" (UniqueName: \"kubernetes.io/projected/53417345-8c17-450a-a5e7-92a877f2363f-kube-api-access-2q97m\") pod \"community-operators-jgz8r\" (UID: \"53417345-8c17-450a-a5e7-92a877f2363f\") " pod="openshift-marketplace/community-operators-jgz8r" Oct 07 13:31:42 crc kubenswrapper[5024]: I1007 13:31:42.717466 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53417345-8c17-450a-a5e7-92a877f2363f-utilities\") pod \"community-operators-jgz8r\" (UID: \"53417345-8c17-450a-a5e7-92a877f2363f\") " pod="openshift-marketplace/community-operators-jgz8r" Oct 07 13:31:42 crc kubenswrapper[5024]: I1007 13:31:42.821023 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53417345-8c17-450a-a5e7-92a877f2363f-utilities\") pod \"community-operators-jgz8r\" (UID: \"53417345-8c17-450a-a5e7-92a877f2363f\") " pod="openshift-marketplace/community-operators-jgz8r" Oct 07 13:31:42 crc kubenswrapper[5024]: I1007 13:31:42.821469 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53417345-8c17-450a-a5e7-92a877f2363f-catalog-content\") pod \"community-operators-jgz8r\" (UID: \"53417345-8c17-450a-a5e7-92a877f2363f\") " pod="openshift-marketplace/community-operators-jgz8r" Oct 07 13:31:42 crc kubenswrapper[5024]: I1007 13:31:42.821501 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q97m\" (UniqueName: \"kubernetes.io/projected/53417345-8c17-450a-a5e7-92a877f2363f-kube-api-access-2q97m\") pod \"community-operators-jgz8r\" (UID: \"53417345-8c17-450a-a5e7-92a877f2363f\") " pod="openshift-marketplace/community-operators-jgz8r" Oct 07 13:31:42 crc kubenswrapper[5024]: I1007 13:31:42.821637 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53417345-8c17-450a-a5e7-92a877f2363f-utilities\") pod \"community-operators-jgz8r\" (UID: \"53417345-8c17-450a-a5e7-92a877f2363f\") " pod="openshift-marketplace/community-operators-jgz8r" Oct 07 13:31:42 crc kubenswrapper[5024]: I1007 13:31:42.821920 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53417345-8c17-450a-a5e7-92a877f2363f-catalog-content\") pod \"community-operators-jgz8r\" (UID: \"53417345-8c17-450a-a5e7-92a877f2363f\") " pod="openshift-marketplace/community-operators-jgz8r" Oct 07 13:31:42 crc kubenswrapper[5024]: I1007 13:31:42.841159 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q97m\" (UniqueName: \"kubernetes.io/projected/53417345-8c17-450a-a5e7-92a877f2363f-kube-api-access-2q97m\") pod \"community-operators-jgz8r\" (UID: \"53417345-8c17-450a-a5e7-92a877f2363f\") " pod="openshift-marketplace/community-operators-jgz8r" Oct 07 13:31:43 crc kubenswrapper[5024]: I1007 13:31:43.063975 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jgz8r" Oct 07 13:31:43 crc kubenswrapper[5024]: I1007 13:31:43.591078 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jgz8r"] Oct 07 13:31:44 crc kubenswrapper[5024]: I1007 13:31:44.186575 5024 generic.go:334] "Generic (PLEG): container finished" podID="53417345-8c17-450a-a5e7-92a877f2363f" containerID="124bc6606ed7743f7c22c81e6ee857fb4a9a7ae18f10c9bde310a0e086a0fb64" exitCode=0 Oct 07 13:31:44 crc kubenswrapper[5024]: I1007 13:31:44.186756 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgz8r" event={"ID":"53417345-8c17-450a-a5e7-92a877f2363f","Type":"ContainerDied","Data":"124bc6606ed7743f7c22c81e6ee857fb4a9a7ae18f10c9bde310a0e086a0fb64"} Oct 07 13:31:44 crc kubenswrapper[5024]: I1007 13:31:44.188674 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgz8r" event={"ID":"53417345-8c17-450a-a5e7-92a877f2363f","Type":"ContainerStarted","Data":"94d976aacfb7d0e3503066fa360e8522c965be23f305faaddb24f1c8ed00ab9c"} Oct 07 13:31:44 crc kubenswrapper[5024]: I1007 13:31:44.191392 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 07 13:31:46 crc kubenswrapper[5024]: I1007 13:31:46.235514 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgz8r" event={"ID":"53417345-8c17-450a-a5e7-92a877f2363f","Type":"ContainerStarted","Data":"38a61c62761c73493ddb9e6fe5f9eaedd4403f225c6cb6ab7797fdf0956425be"} Oct 07 13:31:47 crc kubenswrapper[5024]: I1007 13:31:47.248406 5024 generic.go:334] "Generic (PLEG): container finished" podID="53417345-8c17-450a-a5e7-92a877f2363f" containerID="38a61c62761c73493ddb9e6fe5f9eaedd4403f225c6cb6ab7797fdf0956425be" exitCode=0 Oct 07 13:31:47 crc kubenswrapper[5024]: I1007 13:31:47.248512 5024 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-jgz8r" event={"ID":"53417345-8c17-450a-a5e7-92a877f2363f","Type":"ContainerDied","Data":"38a61c62761c73493ddb9e6fe5f9eaedd4403f225c6cb6ab7797fdf0956425be"} Oct 07 13:31:49 crc kubenswrapper[5024]: I1007 13:31:49.268788 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgz8r" event={"ID":"53417345-8c17-450a-a5e7-92a877f2363f","Type":"ContainerStarted","Data":"1ce772ec9ad852540277d8fd6c89b3ae8b358d094e10bc08053d43e658de0fdf"} Oct 07 13:31:49 crc kubenswrapper[5024]: I1007 13:31:49.295882 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jgz8r" podStartSLOduration=3.419565534 podStartE2EDuration="7.29584628s" podCreationTimestamp="2025-10-07 13:31:42 +0000 UTC" firstStartedPulling="2025-10-07 13:31:44.190932617 +0000 UTC m=+3842.266719495" lastFinishedPulling="2025-10-07 13:31:48.067213403 +0000 UTC m=+3846.143000241" observedRunningTime="2025-10-07 13:31:49.291666699 +0000 UTC m=+3847.367453537" watchObservedRunningTime="2025-10-07 13:31:49.29584628 +0000 UTC m=+3847.371633148" Oct 07 13:31:53 crc kubenswrapper[5024]: I1007 13:31:53.064690 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jgz8r" Oct 07 13:31:53 crc kubenswrapper[5024]: I1007 13:31:53.066040 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jgz8r" Oct 07 13:31:53 crc kubenswrapper[5024]: I1007 13:31:53.149700 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jgz8r" Oct 07 13:31:53 crc kubenswrapper[5024]: I1007 13:31:53.370606 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jgz8r" Oct 07 13:31:53 crc kubenswrapper[5024]: I1007 13:31:53.432525 
5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jgz8r"] Oct 07 13:31:53 crc kubenswrapper[5024]: I1007 13:31:53.752595 5024 scope.go:117] "RemoveContainer" containerID="37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95" Oct 07 13:31:53 crc kubenswrapper[5024]: E1007 13:31:53.753129 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:31:55 crc kubenswrapper[5024]: I1007 13:31:55.330568 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jgz8r" podUID="53417345-8c17-450a-a5e7-92a877f2363f" containerName="registry-server" containerID="cri-o://1ce772ec9ad852540277d8fd6c89b3ae8b358d094e10bc08053d43e658de0fdf" gracePeriod=2 Oct 07 13:31:55 crc kubenswrapper[5024]: I1007 13:31:55.952943 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jgz8r" Oct 07 13:31:56 crc kubenswrapper[5024]: I1007 13:31:56.010014 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53417345-8c17-450a-a5e7-92a877f2363f-utilities\") pod \"53417345-8c17-450a-a5e7-92a877f2363f\" (UID: \"53417345-8c17-450a-a5e7-92a877f2363f\") " Oct 07 13:31:56 crc kubenswrapper[5024]: I1007 13:31:56.010315 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53417345-8c17-450a-a5e7-92a877f2363f-catalog-content\") pod \"53417345-8c17-450a-a5e7-92a877f2363f\" (UID: \"53417345-8c17-450a-a5e7-92a877f2363f\") " Oct 07 13:31:56 crc kubenswrapper[5024]: I1007 13:31:56.010502 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2q97m\" (UniqueName: \"kubernetes.io/projected/53417345-8c17-450a-a5e7-92a877f2363f-kube-api-access-2q97m\") pod \"53417345-8c17-450a-a5e7-92a877f2363f\" (UID: \"53417345-8c17-450a-a5e7-92a877f2363f\") " Oct 07 13:31:56 crc kubenswrapper[5024]: I1007 13:31:56.012833 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53417345-8c17-450a-a5e7-92a877f2363f-utilities" (OuterVolumeSpecName: "utilities") pod "53417345-8c17-450a-a5e7-92a877f2363f" (UID: "53417345-8c17-450a-a5e7-92a877f2363f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:31:56 crc kubenswrapper[5024]: I1007 13:31:56.019602 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53417345-8c17-450a-a5e7-92a877f2363f-kube-api-access-2q97m" (OuterVolumeSpecName: "kube-api-access-2q97m") pod "53417345-8c17-450a-a5e7-92a877f2363f" (UID: "53417345-8c17-450a-a5e7-92a877f2363f"). InnerVolumeSpecName "kube-api-access-2q97m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:31:56 crc kubenswrapper[5024]: I1007 13:31:56.077706 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53417345-8c17-450a-a5e7-92a877f2363f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53417345-8c17-450a-a5e7-92a877f2363f" (UID: "53417345-8c17-450a-a5e7-92a877f2363f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:31:56 crc kubenswrapper[5024]: I1007 13:31:56.112382 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2q97m\" (UniqueName: \"kubernetes.io/projected/53417345-8c17-450a-a5e7-92a877f2363f-kube-api-access-2q97m\") on node \"crc\" DevicePath \"\"" Oct 07 13:31:56 crc kubenswrapper[5024]: I1007 13:31:56.112433 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53417345-8c17-450a-a5e7-92a877f2363f-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 13:31:56 crc kubenswrapper[5024]: I1007 13:31:56.112447 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53417345-8c17-450a-a5e7-92a877f2363f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 13:31:56 crc kubenswrapper[5024]: I1007 13:31:56.346355 5024 generic.go:334] "Generic (PLEG): container finished" podID="53417345-8c17-450a-a5e7-92a877f2363f" containerID="1ce772ec9ad852540277d8fd6c89b3ae8b358d094e10bc08053d43e658de0fdf" exitCode=0 Oct 07 13:31:56 crc kubenswrapper[5024]: I1007 13:31:56.346460 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgz8r" event={"ID":"53417345-8c17-450a-a5e7-92a877f2363f","Type":"ContainerDied","Data":"1ce772ec9ad852540277d8fd6c89b3ae8b358d094e10bc08053d43e658de0fdf"} Oct 07 13:31:56 crc kubenswrapper[5024]: I1007 13:31:56.346549 5024 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-jgz8r" Oct 07 13:31:56 crc kubenswrapper[5024]: I1007 13:31:56.347118 5024 scope.go:117] "RemoveContainer" containerID="1ce772ec9ad852540277d8fd6c89b3ae8b358d094e10bc08053d43e658de0fdf" Oct 07 13:31:56 crc kubenswrapper[5024]: I1007 13:31:56.347090 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgz8r" event={"ID":"53417345-8c17-450a-a5e7-92a877f2363f","Type":"ContainerDied","Data":"94d976aacfb7d0e3503066fa360e8522c965be23f305faaddb24f1c8ed00ab9c"} Oct 07 13:31:56 crc kubenswrapper[5024]: I1007 13:31:56.393389 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jgz8r"] Oct 07 13:31:56 crc kubenswrapper[5024]: I1007 13:31:56.393585 5024 scope.go:117] "RemoveContainer" containerID="38a61c62761c73493ddb9e6fe5f9eaedd4403f225c6cb6ab7797fdf0956425be" Oct 07 13:31:56 crc kubenswrapper[5024]: I1007 13:31:56.401310 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jgz8r"] Oct 07 13:31:56 crc kubenswrapper[5024]: I1007 13:31:56.735726 5024 scope.go:117] "RemoveContainer" containerID="124bc6606ed7743f7c22c81e6ee857fb4a9a7ae18f10c9bde310a0e086a0fb64" Oct 07 13:31:56 crc kubenswrapper[5024]: I1007 13:31:56.766789 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53417345-8c17-450a-a5e7-92a877f2363f" path="/var/lib/kubelet/pods/53417345-8c17-450a-a5e7-92a877f2363f/volumes" Oct 07 13:31:56 crc kubenswrapper[5024]: I1007 13:31:56.772631 5024 scope.go:117] "RemoveContainer" containerID="1ce772ec9ad852540277d8fd6c89b3ae8b358d094e10bc08053d43e658de0fdf" Oct 07 13:31:56 crc kubenswrapper[5024]: E1007 13:31:56.773396 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ce772ec9ad852540277d8fd6c89b3ae8b358d094e10bc08053d43e658de0fdf\": container with ID 
starting with 1ce772ec9ad852540277d8fd6c89b3ae8b358d094e10bc08053d43e658de0fdf not found: ID does not exist" containerID="1ce772ec9ad852540277d8fd6c89b3ae8b358d094e10bc08053d43e658de0fdf" Oct 07 13:31:56 crc kubenswrapper[5024]: I1007 13:31:56.773456 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ce772ec9ad852540277d8fd6c89b3ae8b358d094e10bc08053d43e658de0fdf"} err="failed to get container status \"1ce772ec9ad852540277d8fd6c89b3ae8b358d094e10bc08053d43e658de0fdf\": rpc error: code = NotFound desc = could not find container \"1ce772ec9ad852540277d8fd6c89b3ae8b358d094e10bc08053d43e658de0fdf\": container with ID starting with 1ce772ec9ad852540277d8fd6c89b3ae8b358d094e10bc08053d43e658de0fdf not found: ID does not exist" Oct 07 13:31:56 crc kubenswrapper[5024]: I1007 13:31:56.773505 5024 scope.go:117] "RemoveContainer" containerID="38a61c62761c73493ddb9e6fe5f9eaedd4403f225c6cb6ab7797fdf0956425be" Oct 07 13:31:56 crc kubenswrapper[5024]: E1007 13:31:56.774011 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38a61c62761c73493ddb9e6fe5f9eaedd4403f225c6cb6ab7797fdf0956425be\": container with ID starting with 38a61c62761c73493ddb9e6fe5f9eaedd4403f225c6cb6ab7797fdf0956425be not found: ID does not exist" containerID="38a61c62761c73493ddb9e6fe5f9eaedd4403f225c6cb6ab7797fdf0956425be" Oct 07 13:31:56 crc kubenswrapper[5024]: I1007 13:31:56.774055 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38a61c62761c73493ddb9e6fe5f9eaedd4403f225c6cb6ab7797fdf0956425be"} err="failed to get container status \"38a61c62761c73493ddb9e6fe5f9eaedd4403f225c6cb6ab7797fdf0956425be\": rpc error: code = NotFound desc = could not find container \"38a61c62761c73493ddb9e6fe5f9eaedd4403f225c6cb6ab7797fdf0956425be\": container with ID starting with 38a61c62761c73493ddb9e6fe5f9eaedd4403f225c6cb6ab7797fdf0956425be not found: 
ID does not exist" Oct 07 13:31:56 crc kubenswrapper[5024]: I1007 13:31:56.774085 5024 scope.go:117] "RemoveContainer" containerID="124bc6606ed7743f7c22c81e6ee857fb4a9a7ae18f10c9bde310a0e086a0fb64" Oct 07 13:31:56 crc kubenswrapper[5024]: E1007 13:31:56.774402 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"124bc6606ed7743f7c22c81e6ee857fb4a9a7ae18f10c9bde310a0e086a0fb64\": container with ID starting with 124bc6606ed7743f7c22c81e6ee857fb4a9a7ae18f10c9bde310a0e086a0fb64 not found: ID does not exist" containerID="124bc6606ed7743f7c22c81e6ee857fb4a9a7ae18f10c9bde310a0e086a0fb64" Oct 07 13:31:56 crc kubenswrapper[5024]: I1007 13:31:56.774441 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"124bc6606ed7743f7c22c81e6ee857fb4a9a7ae18f10c9bde310a0e086a0fb64"} err="failed to get container status \"124bc6606ed7743f7c22c81e6ee857fb4a9a7ae18f10c9bde310a0e086a0fb64\": rpc error: code = NotFound desc = could not find container \"124bc6606ed7743f7c22c81e6ee857fb4a9a7ae18f10c9bde310a0e086a0fb64\": container with ID starting with 124bc6606ed7743f7c22c81e6ee857fb4a9a7ae18f10c9bde310a0e086a0fb64 not found: ID does not exist" Oct 07 13:32:04 crc kubenswrapper[5024]: I1007 13:32:04.751661 5024 scope.go:117] "RemoveContainer" containerID="37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95" Oct 07 13:32:04 crc kubenswrapper[5024]: E1007 13:32:04.752522 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:32:12 crc kubenswrapper[5024]: I1007 13:32:12.045221 5024 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-75p7j"] Oct 07 13:32:12 crc kubenswrapper[5024]: I1007 13:32:12.056747 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-75p7j"] Oct 07 13:32:12 crc kubenswrapper[5024]: I1007 13:32:12.778378 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66c8aa1b-83b7-456a-a8be-770922c03068" path="/var/lib/kubelet/pods/66c8aa1b-83b7-456a-a8be-770922c03068/volumes" Oct 07 13:32:17 crc kubenswrapper[5024]: I1007 13:32:17.752256 5024 scope.go:117] "RemoveContainer" containerID="37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95" Oct 07 13:32:17 crc kubenswrapper[5024]: E1007 13:32:17.753489 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:32:21 crc kubenswrapper[5024]: I1007 13:32:21.036860 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-75ff-account-create-5l2r7"] Oct 07 13:32:21 crc kubenswrapper[5024]: I1007 13:32:21.045230 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-75ff-account-create-5l2r7"] Oct 07 13:32:22 crc kubenswrapper[5024]: I1007 13:32:22.827612 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27f6d105-987d-4f6b-b9cb-7613ec68c794" path="/var/lib/kubelet/pods/27f6d105-987d-4f6b-b9cb-7613ec68c794/volumes" Oct 07 13:32:31 crc kubenswrapper[5024]: I1007 13:32:31.752068 5024 scope.go:117] "RemoveContainer" containerID="37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95" Oct 07 13:32:31 crc kubenswrapper[5024]: E1007 13:32:31.752993 
5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:32:44 crc kubenswrapper[5024]: I1007 13:32:44.752194 5024 scope.go:117] "RemoveContainer" containerID="37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95" Oct 07 13:32:44 crc kubenswrapper[5024]: E1007 13:32:44.753099 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:32:49 crc kubenswrapper[5024]: I1007 13:32:49.483857 5024 scope.go:117] "RemoveContainer" containerID="21ea78c85e895b77ef72c288a5a169fbf080ed41efd1a385d05e57688ede3e53" Oct 07 13:32:49 crc kubenswrapper[5024]: I1007 13:32:49.536791 5024 scope.go:117] "RemoveContainer" containerID="cc349360be6e14b4a8dcad77aed9c517c2000836ad09188e31b75999acdae976" Oct 07 13:32:55 crc kubenswrapper[5024]: I1007 13:32:55.751864 5024 scope.go:117] "RemoveContainer" containerID="37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95" Oct 07 13:32:55 crc kubenswrapper[5024]: E1007 13:32:55.752886 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:33:06 crc kubenswrapper[5024]: I1007 13:33:06.056065 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-6fppq"] Oct 07 13:33:06 crc kubenswrapper[5024]: I1007 13:33:06.066272 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-6fppq"] Oct 07 13:33:06 crc kubenswrapper[5024]: I1007 13:33:06.772991 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aad68430-9c4b-4a22-b2a3-417a796af04b" path="/var/lib/kubelet/pods/aad68430-9c4b-4a22-b2a3-417a796af04b/volumes" Oct 07 13:33:07 crc kubenswrapper[5024]: I1007 13:33:07.752682 5024 scope.go:117] "RemoveContainer" containerID="37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95" Oct 07 13:33:07 crc kubenswrapper[5024]: E1007 13:33:07.753114 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:33:11 crc kubenswrapper[5024]: I1007 13:33:11.353781 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dt9vq"] Oct 07 13:33:11 crc kubenswrapper[5024]: E1007 13:33:11.354980 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53417345-8c17-450a-a5e7-92a877f2363f" containerName="extract-utilities" Oct 07 13:33:11 crc kubenswrapper[5024]: I1007 13:33:11.354999 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="53417345-8c17-450a-a5e7-92a877f2363f" 
containerName="extract-utilities" Oct 07 13:33:11 crc kubenswrapper[5024]: E1007 13:33:11.355043 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53417345-8c17-450a-a5e7-92a877f2363f" containerName="extract-content" Oct 07 13:33:11 crc kubenswrapper[5024]: I1007 13:33:11.355051 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="53417345-8c17-450a-a5e7-92a877f2363f" containerName="extract-content" Oct 07 13:33:11 crc kubenswrapper[5024]: E1007 13:33:11.355076 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53417345-8c17-450a-a5e7-92a877f2363f" containerName="registry-server" Oct 07 13:33:11 crc kubenswrapper[5024]: I1007 13:33:11.355084 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="53417345-8c17-450a-a5e7-92a877f2363f" containerName="registry-server" Oct 07 13:33:11 crc kubenswrapper[5024]: I1007 13:33:11.355381 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="53417345-8c17-450a-a5e7-92a877f2363f" containerName="registry-server" Oct 07 13:33:11 crc kubenswrapper[5024]: I1007 13:33:11.357649 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dt9vq" Oct 07 13:33:11 crc kubenswrapper[5024]: I1007 13:33:11.375980 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dt9vq"] Oct 07 13:33:11 crc kubenswrapper[5024]: I1007 13:33:11.553340 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfcph\" (UniqueName: \"kubernetes.io/projected/45103915-6f3a-4a77-9bce-9e16d34166f6-kube-api-access-hfcph\") pod \"redhat-operators-dt9vq\" (UID: \"45103915-6f3a-4a77-9bce-9e16d34166f6\") " pod="openshift-marketplace/redhat-operators-dt9vq" Oct 07 13:33:11 crc kubenswrapper[5024]: I1007 13:33:11.553947 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45103915-6f3a-4a77-9bce-9e16d34166f6-catalog-content\") pod \"redhat-operators-dt9vq\" (UID: \"45103915-6f3a-4a77-9bce-9e16d34166f6\") " pod="openshift-marketplace/redhat-operators-dt9vq" Oct 07 13:33:11 crc kubenswrapper[5024]: I1007 13:33:11.554000 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45103915-6f3a-4a77-9bce-9e16d34166f6-utilities\") pod \"redhat-operators-dt9vq\" (UID: \"45103915-6f3a-4a77-9bce-9e16d34166f6\") " pod="openshift-marketplace/redhat-operators-dt9vq" Oct 07 13:33:11 crc kubenswrapper[5024]: I1007 13:33:11.656815 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45103915-6f3a-4a77-9bce-9e16d34166f6-catalog-content\") pod \"redhat-operators-dt9vq\" (UID: \"45103915-6f3a-4a77-9bce-9e16d34166f6\") " pod="openshift-marketplace/redhat-operators-dt9vq" Oct 07 13:33:11 crc kubenswrapper[5024]: I1007 13:33:11.656899 5024 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45103915-6f3a-4a77-9bce-9e16d34166f6-utilities\") pod \"redhat-operators-dt9vq\" (UID: \"45103915-6f3a-4a77-9bce-9e16d34166f6\") " pod="openshift-marketplace/redhat-operators-dt9vq" Oct 07 13:33:11 crc kubenswrapper[5024]: I1007 13:33:11.657029 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfcph\" (UniqueName: \"kubernetes.io/projected/45103915-6f3a-4a77-9bce-9e16d34166f6-kube-api-access-hfcph\") pod \"redhat-operators-dt9vq\" (UID: \"45103915-6f3a-4a77-9bce-9e16d34166f6\") " pod="openshift-marketplace/redhat-operators-dt9vq" Oct 07 13:33:11 crc kubenswrapper[5024]: I1007 13:33:11.657511 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45103915-6f3a-4a77-9bce-9e16d34166f6-catalog-content\") pod \"redhat-operators-dt9vq\" (UID: \"45103915-6f3a-4a77-9bce-9e16d34166f6\") " pod="openshift-marketplace/redhat-operators-dt9vq" Oct 07 13:33:11 crc kubenswrapper[5024]: I1007 13:33:11.657910 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45103915-6f3a-4a77-9bce-9e16d34166f6-utilities\") pod \"redhat-operators-dt9vq\" (UID: \"45103915-6f3a-4a77-9bce-9e16d34166f6\") " pod="openshift-marketplace/redhat-operators-dt9vq" Oct 07 13:33:11 crc kubenswrapper[5024]: I1007 13:33:11.689598 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfcph\" (UniqueName: \"kubernetes.io/projected/45103915-6f3a-4a77-9bce-9e16d34166f6-kube-api-access-hfcph\") pod \"redhat-operators-dt9vq\" (UID: \"45103915-6f3a-4a77-9bce-9e16d34166f6\") " pod="openshift-marketplace/redhat-operators-dt9vq" Oct 07 13:33:11 crc kubenswrapper[5024]: I1007 13:33:11.696240 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dt9vq" Oct 07 13:33:12 crc kubenswrapper[5024]: I1007 13:33:12.205817 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dt9vq"] Oct 07 13:33:13 crc kubenswrapper[5024]: I1007 13:33:13.208226 5024 generic.go:334] "Generic (PLEG): container finished" podID="45103915-6f3a-4a77-9bce-9e16d34166f6" containerID="63f2b5b72eab14e6b8ec947c131f9e77ee5ee22585780eae9cced58dd6f661b4" exitCode=0 Oct 07 13:33:13 crc kubenswrapper[5024]: I1007 13:33:13.208877 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dt9vq" event={"ID":"45103915-6f3a-4a77-9bce-9e16d34166f6","Type":"ContainerDied","Data":"63f2b5b72eab14e6b8ec947c131f9e77ee5ee22585780eae9cced58dd6f661b4"} Oct 07 13:33:13 crc kubenswrapper[5024]: I1007 13:33:13.208909 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dt9vq" event={"ID":"45103915-6f3a-4a77-9bce-9e16d34166f6","Type":"ContainerStarted","Data":"72a89a0897f7003d30db7561ab911b5e8b2965541e9a96654cd848e539db8f96"} Oct 07 13:33:16 crc kubenswrapper[5024]: I1007 13:33:16.241770 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dt9vq" event={"ID":"45103915-6f3a-4a77-9bce-9e16d34166f6","Type":"ContainerStarted","Data":"b853df19e17155d052cd4f13e96f016c07838142d4eb8c51b9a9e9584769dc3e"} Oct 07 13:33:19 crc kubenswrapper[5024]: I1007 13:33:19.753593 5024 scope.go:117] "RemoveContainer" containerID="37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95" Oct 07 13:33:19 crc kubenswrapper[5024]: E1007 13:33:19.754818 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:33:29 crc kubenswrapper[5024]: I1007 13:33:29.401269 5024 generic.go:334] "Generic (PLEG): container finished" podID="45103915-6f3a-4a77-9bce-9e16d34166f6" containerID="b853df19e17155d052cd4f13e96f016c07838142d4eb8c51b9a9e9584769dc3e" exitCode=0 Oct 07 13:33:29 crc kubenswrapper[5024]: I1007 13:33:29.401389 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dt9vq" event={"ID":"45103915-6f3a-4a77-9bce-9e16d34166f6","Type":"ContainerDied","Data":"b853df19e17155d052cd4f13e96f016c07838142d4eb8c51b9a9e9584769dc3e"} Oct 07 13:33:31 crc kubenswrapper[5024]: I1007 13:33:31.752302 5024 scope.go:117] "RemoveContainer" containerID="37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95" Oct 07 13:33:31 crc kubenswrapper[5024]: E1007 13:33:31.753765 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:33:34 crc kubenswrapper[5024]: I1007 13:33:34.458158 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dt9vq" event={"ID":"45103915-6f3a-4a77-9bce-9e16d34166f6","Type":"ContainerStarted","Data":"28a30fe8977a890ae4c382be21b4c465d8e41f532fff11162c1cf641de4528da"} Oct 07 13:33:34 crc kubenswrapper[5024]: I1007 13:33:34.483324 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dt9vq" 
podStartSLOduration=5.371971747 podStartE2EDuration="23.483302909s" podCreationTimestamp="2025-10-07 13:33:11 +0000 UTC" firstStartedPulling="2025-10-07 13:33:13.210852522 +0000 UTC m=+3931.286639360" lastFinishedPulling="2025-10-07 13:33:31.322183664 +0000 UTC m=+3949.397970522" observedRunningTime="2025-10-07 13:33:34.476550843 +0000 UTC m=+3952.552337691" watchObservedRunningTime="2025-10-07 13:33:34.483302909 +0000 UTC m=+3952.559089747" Oct 07 13:33:41 crc kubenswrapper[5024]: I1007 13:33:41.697658 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dt9vq" Oct 07 13:33:41 crc kubenswrapper[5024]: I1007 13:33:41.698316 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dt9vq" Oct 07 13:33:42 crc kubenswrapper[5024]: I1007 13:33:42.790700 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dt9vq" podUID="45103915-6f3a-4a77-9bce-9e16d34166f6" containerName="registry-server" probeResult="failure" output=< Oct 07 13:33:42 crc kubenswrapper[5024]: timeout: failed to connect service ":50051" within 1s Oct 07 13:33:42 crc kubenswrapper[5024]: > Oct 07 13:33:44 crc kubenswrapper[5024]: I1007 13:33:44.753850 5024 scope.go:117] "RemoveContainer" containerID="37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95" Oct 07 13:33:44 crc kubenswrapper[5024]: E1007 13:33:44.754758 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:33:49 crc kubenswrapper[5024]: I1007 13:33:49.677108 5024 scope.go:117] 
"RemoveContainer" containerID="49091536d96d3757b29ec84b40c9bad72868024751b83cbe1b55f663a15d3599" Oct 07 13:33:52 crc kubenswrapper[5024]: I1007 13:33:52.764488 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dt9vq" podUID="45103915-6f3a-4a77-9bce-9e16d34166f6" containerName="registry-server" probeResult="failure" output=< Oct 07 13:33:52 crc kubenswrapper[5024]: timeout: failed to connect service ":50051" within 1s Oct 07 13:33:52 crc kubenswrapper[5024]: > Oct 07 13:33:58 crc kubenswrapper[5024]: I1007 13:33:58.752475 5024 scope.go:117] "RemoveContainer" containerID="37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95" Oct 07 13:33:58 crc kubenswrapper[5024]: E1007 13:33:58.753380 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:34:02 crc kubenswrapper[5024]: I1007 13:34:02.749744 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dt9vq" podUID="45103915-6f3a-4a77-9bce-9e16d34166f6" containerName="registry-server" probeResult="failure" output=< Oct 07 13:34:02 crc kubenswrapper[5024]: timeout: failed to connect service ":50051" within 1s Oct 07 13:34:02 crc kubenswrapper[5024]: > Oct 07 13:34:10 crc kubenswrapper[5024]: I1007 13:34:10.751896 5024 scope.go:117] "RemoveContainer" containerID="37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95" Oct 07 13:34:10 crc kubenswrapper[5024]: E1007 13:34:10.752585 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:34:11 crc kubenswrapper[5024]: I1007 13:34:11.752814 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dt9vq" Oct 07 13:34:11 crc kubenswrapper[5024]: I1007 13:34:11.809291 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dt9vq" Oct 07 13:34:12 crc kubenswrapper[5024]: I1007 13:34:12.578580 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dt9vq"] Oct 07 13:34:12 crc kubenswrapper[5024]: I1007 13:34:12.910302 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dt9vq" podUID="45103915-6f3a-4a77-9bce-9e16d34166f6" containerName="registry-server" containerID="cri-o://28a30fe8977a890ae4c382be21b4c465d8e41f532fff11162c1cf641de4528da" gracePeriod=2 Oct 07 13:34:13 crc kubenswrapper[5024]: I1007 13:34:13.479698 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dt9vq" Oct 07 13:34:13 crc kubenswrapper[5024]: I1007 13:34:13.669547 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45103915-6f3a-4a77-9bce-9e16d34166f6-catalog-content\") pod \"45103915-6f3a-4a77-9bce-9e16d34166f6\" (UID: \"45103915-6f3a-4a77-9bce-9e16d34166f6\") " Oct 07 13:34:13 crc kubenswrapper[5024]: I1007 13:34:13.669705 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45103915-6f3a-4a77-9bce-9e16d34166f6-utilities\") pod \"45103915-6f3a-4a77-9bce-9e16d34166f6\" (UID: \"45103915-6f3a-4a77-9bce-9e16d34166f6\") " Oct 07 13:34:13 crc kubenswrapper[5024]: I1007 13:34:13.670073 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfcph\" (UniqueName: \"kubernetes.io/projected/45103915-6f3a-4a77-9bce-9e16d34166f6-kube-api-access-hfcph\") pod \"45103915-6f3a-4a77-9bce-9e16d34166f6\" (UID: \"45103915-6f3a-4a77-9bce-9e16d34166f6\") " Oct 07 13:34:13 crc kubenswrapper[5024]: I1007 13:34:13.671179 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45103915-6f3a-4a77-9bce-9e16d34166f6-utilities" (OuterVolumeSpecName: "utilities") pod "45103915-6f3a-4a77-9bce-9e16d34166f6" (UID: "45103915-6f3a-4a77-9bce-9e16d34166f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:34:13 crc kubenswrapper[5024]: I1007 13:34:13.679533 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45103915-6f3a-4a77-9bce-9e16d34166f6-kube-api-access-hfcph" (OuterVolumeSpecName: "kube-api-access-hfcph") pod "45103915-6f3a-4a77-9bce-9e16d34166f6" (UID: "45103915-6f3a-4a77-9bce-9e16d34166f6"). InnerVolumeSpecName "kube-api-access-hfcph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:34:13 crc kubenswrapper[5024]: I1007 13:34:13.773408 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfcph\" (UniqueName: \"kubernetes.io/projected/45103915-6f3a-4a77-9bce-9e16d34166f6-kube-api-access-hfcph\") on node \"crc\" DevicePath \"\"" Oct 07 13:34:13 crc kubenswrapper[5024]: I1007 13:34:13.773456 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45103915-6f3a-4a77-9bce-9e16d34166f6-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 13:34:13 crc kubenswrapper[5024]: I1007 13:34:13.802031 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45103915-6f3a-4a77-9bce-9e16d34166f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45103915-6f3a-4a77-9bce-9e16d34166f6" (UID: "45103915-6f3a-4a77-9bce-9e16d34166f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:34:13 crc kubenswrapper[5024]: I1007 13:34:13.876048 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45103915-6f3a-4a77-9bce-9e16d34166f6-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 13:34:13 crc kubenswrapper[5024]: I1007 13:34:13.927286 5024 generic.go:334] "Generic (PLEG): container finished" podID="45103915-6f3a-4a77-9bce-9e16d34166f6" containerID="28a30fe8977a890ae4c382be21b4c465d8e41f532fff11162c1cf641de4528da" exitCode=0 Oct 07 13:34:13 crc kubenswrapper[5024]: I1007 13:34:13.927371 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dt9vq" event={"ID":"45103915-6f3a-4a77-9bce-9e16d34166f6","Type":"ContainerDied","Data":"28a30fe8977a890ae4c382be21b4c465d8e41f532fff11162c1cf641de4528da"} Oct 07 13:34:13 crc kubenswrapper[5024]: I1007 13:34:13.927458 5024 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dt9vq" Oct 07 13:34:13 crc kubenswrapper[5024]: I1007 13:34:13.929040 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dt9vq" event={"ID":"45103915-6f3a-4a77-9bce-9e16d34166f6","Type":"ContainerDied","Data":"72a89a0897f7003d30db7561ab911b5e8b2965541e9a96654cd848e539db8f96"} Oct 07 13:34:13 crc kubenswrapper[5024]: I1007 13:34:13.929065 5024 scope.go:117] "RemoveContainer" containerID="28a30fe8977a890ae4c382be21b4c465d8e41f532fff11162c1cf641de4528da" Oct 07 13:34:13 crc kubenswrapper[5024]: I1007 13:34:13.965812 5024 scope.go:117] "RemoveContainer" containerID="b853df19e17155d052cd4f13e96f016c07838142d4eb8c51b9a9e9584769dc3e" Oct 07 13:34:13 crc kubenswrapper[5024]: I1007 13:34:13.995356 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dt9vq"] Oct 07 13:34:14 crc kubenswrapper[5024]: I1007 13:34:14.011973 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dt9vq"] Oct 07 13:34:14 crc kubenswrapper[5024]: I1007 13:34:14.014055 5024 scope.go:117] "RemoveContainer" containerID="63f2b5b72eab14e6b8ec947c131f9e77ee5ee22585780eae9cced58dd6f661b4" Oct 07 13:34:14 crc kubenswrapper[5024]: I1007 13:34:14.072213 5024 scope.go:117] "RemoveContainer" containerID="28a30fe8977a890ae4c382be21b4c465d8e41f532fff11162c1cf641de4528da" Oct 07 13:34:14 crc kubenswrapper[5024]: E1007 13:34:14.073631 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28a30fe8977a890ae4c382be21b4c465d8e41f532fff11162c1cf641de4528da\": container with ID starting with 28a30fe8977a890ae4c382be21b4c465d8e41f532fff11162c1cf641de4528da not found: ID does not exist" containerID="28a30fe8977a890ae4c382be21b4c465d8e41f532fff11162c1cf641de4528da" Oct 07 13:34:14 crc kubenswrapper[5024]: I1007 13:34:14.073672 5024 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28a30fe8977a890ae4c382be21b4c465d8e41f532fff11162c1cf641de4528da"} err="failed to get container status \"28a30fe8977a890ae4c382be21b4c465d8e41f532fff11162c1cf641de4528da\": rpc error: code = NotFound desc = could not find container \"28a30fe8977a890ae4c382be21b4c465d8e41f532fff11162c1cf641de4528da\": container with ID starting with 28a30fe8977a890ae4c382be21b4c465d8e41f532fff11162c1cf641de4528da not found: ID does not exist" Oct 07 13:34:14 crc kubenswrapper[5024]: I1007 13:34:14.073701 5024 scope.go:117] "RemoveContainer" containerID="b853df19e17155d052cd4f13e96f016c07838142d4eb8c51b9a9e9584769dc3e" Oct 07 13:34:14 crc kubenswrapper[5024]: E1007 13:34:14.076758 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b853df19e17155d052cd4f13e96f016c07838142d4eb8c51b9a9e9584769dc3e\": container with ID starting with b853df19e17155d052cd4f13e96f016c07838142d4eb8c51b9a9e9584769dc3e not found: ID does not exist" containerID="b853df19e17155d052cd4f13e96f016c07838142d4eb8c51b9a9e9584769dc3e" Oct 07 13:34:14 crc kubenswrapper[5024]: I1007 13:34:14.076808 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b853df19e17155d052cd4f13e96f016c07838142d4eb8c51b9a9e9584769dc3e"} err="failed to get container status \"b853df19e17155d052cd4f13e96f016c07838142d4eb8c51b9a9e9584769dc3e\": rpc error: code = NotFound desc = could not find container \"b853df19e17155d052cd4f13e96f016c07838142d4eb8c51b9a9e9584769dc3e\": container with ID starting with b853df19e17155d052cd4f13e96f016c07838142d4eb8c51b9a9e9584769dc3e not found: ID does not exist" Oct 07 13:34:14 crc kubenswrapper[5024]: I1007 13:34:14.076842 5024 scope.go:117] "RemoveContainer" containerID="63f2b5b72eab14e6b8ec947c131f9e77ee5ee22585780eae9cced58dd6f661b4" Oct 07 13:34:14 crc kubenswrapper[5024]: E1007 
13:34:14.078108 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63f2b5b72eab14e6b8ec947c131f9e77ee5ee22585780eae9cced58dd6f661b4\": container with ID starting with 63f2b5b72eab14e6b8ec947c131f9e77ee5ee22585780eae9cced58dd6f661b4 not found: ID does not exist" containerID="63f2b5b72eab14e6b8ec947c131f9e77ee5ee22585780eae9cced58dd6f661b4" Oct 07 13:34:14 crc kubenswrapper[5024]: I1007 13:34:14.078181 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63f2b5b72eab14e6b8ec947c131f9e77ee5ee22585780eae9cced58dd6f661b4"} err="failed to get container status \"63f2b5b72eab14e6b8ec947c131f9e77ee5ee22585780eae9cced58dd6f661b4\": rpc error: code = NotFound desc = could not find container \"63f2b5b72eab14e6b8ec947c131f9e77ee5ee22585780eae9cced58dd6f661b4\": container with ID starting with 63f2b5b72eab14e6b8ec947c131f9e77ee5ee22585780eae9cced58dd6f661b4 not found: ID does not exist" Oct 07 13:34:14 crc kubenswrapper[5024]: I1007 13:34:14.775381 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45103915-6f3a-4a77-9bce-9e16d34166f6" path="/var/lib/kubelet/pods/45103915-6f3a-4a77-9bce-9e16d34166f6/volumes" Oct 07 13:34:24 crc kubenswrapper[5024]: I1007 13:34:24.752324 5024 scope.go:117] "RemoveContainer" containerID="37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95" Oct 07 13:34:24 crc kubenswrapper[5024]: E1007 13:34:24.754528 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:34:37 crc kubenswrapper[5024]: I1007 13:34:37.752766 
5024 scope.go:117] "RemoveContainer" containerID="37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95" Oct 07 13:34:37 crc kubenswrapper[5024]: E1007 13:34:37.753887 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:34:52 crc kubenswrapper[5024]: I1007 13:34:52.758360 5024 scope.go:117] "RemoveContainer" containerID="37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95" Oct 07 13:34:52 crc kubenswrapper[5024]: E1007 13:34:52.759446 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:35:03 crc kubenswrapper[5024]: I1007 13:35:03.433738 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-txkz6"] Oct 07 13:35:03 crc kubenswrapper[5024]: E1007 13:35:03.438557 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45103915-6f3a-4a77-9bce-9e16d34166f6" containerName="registry-server" Oct 07 13:35:03 crc kubenswrapper[5024]: I1007 13:35:03.438593 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="45103915-6f3a-4a77-9bce-9e16d34166f6" containerName="registry-server" Oct 07 13:35:03 crc kubenswrapper[5024]: E1007 13:35:03.438631 5024 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="45103915-6f3a-4a77-9bce-9e16d34166f6" containerName="extract-utilities" Oct 07 13:35:03 crc kubenswrapper[5024]: I1007 13:35:03.438639 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="45103915-6f3a-4a77-9bce-9e16d34166f6" containerName="extract-utilities" Oct 07 13:35:03 crc kubenswrapper[5024]: E1007 13:35:03.438656 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45103915-6f3a-4a77-9bce-9e16d34166f6" containerName="extract-content" Oct 07 13:35:03 crc kubenswrapper[5024]: I1007 13:35:03.438662 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="45103915-6f3a-4a77-9bce-9e16d34166f6" containerName="extract-content" Oct 07 13:35:03 crc kubenswrapper[5024]: I1007 13:35:03.438870 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="45103915-6f3a-4a77-9bce-9e16d34166f6" containerName="registry-server" Oct 07 13:35:03 crc kubenswrapper[5024]: I1007 13:35:03.440402 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-txkz6" Oct 07 13:35:03 crc kubenswrapper[5024]: I1007 13:35:03.447848 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-txkz6"] Oct 07 13:35:03 crc kubenswrapper[5024]: I1007 13:35:03.531555 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ca3ed36-b7c3-4926-b0d6-2613cacde76f-utilities\") pod \"certified-operators-txkz6\" (UID: \"0ca3ed36-b7c3-4926-b0d6-2613cacde76f\") " pod="openshift-marketplace/certified-operators-txkz6" Oct 07 13:35:03 crc kubenswrapper[5024]: I1007 13:35:03.531637 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ca3ed36-b7c3-4926-b0d6-2613cacde76f-catalog-content\") pod \"certified-operators-txkz6\" (UID: 
\"0ca3ed36-b7c3-4926-b0d6-2613cacde76f\") " pod="openshift-marketplace/certified-operators-txkz6" Oct 07 13:35:03 crc kubenswrapper[5024]: I1007 13:35:03.531705 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pmhz\" (UniqueName: \"kubernetes.io/projected/0ca3ed36-b7c3-4926-b0d6-2613cacde76f-kube-api-access-7pmhz\") pod \"certified-operators-txkz6\" (UID: \"0ca3ed36-b7c3-4926-b0d6-2613cacde76f\") " pod="openshift-marketplace/certified-operators-txkz6" Oct 07 13:35:03 crc kubenswrapper[5024]: I1007 13:35:03.633726 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ca3ed36-b7c3-4926-b0d6-2613cacde76f-utilities\") pod \"certified-operators-txkz6\" (UID: \"0ca3ed36-b7c3-4926-b0d6-2613cacde76f\") " pod="openshift-marketplace/certified-operators-txkz6" Oct 07 13:35:03 crc kubenswrapper[5024]: I1007 13:35:03.634255 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ca3ed36-b7c3-4926-b0d6-2613cacde76f-catalog-content\") pod \"certified-operators-txkz6\" (UID: \"0ca3ed36-b7c3-4926-b0d6-2613cacde76f\") " pod="openshift-marketplace/certified-operators-txkz6" Oct 07 13:35:03 crc kubenswrapper[5024]: I1007 13:35:03.634380 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pmhz\" (UniqueName: \"kubernetes.io/projected/0ca3ed36-b7c3-4926-b0d6-2613cacde76f-kube-api-access-7pmhz\") pod \"certified-operators-txkz6\" (UID: \"0ca3ed36-b7c3-4926-b0d6-2613cacde76f\") " pod="openshift-marketplace/certified-operators-txkz6" Oct 07 13:35:03 crc kubenswrapper[5024]: I1007 13:35:03.635824 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ca3ed36-b7c3-4926-b0d6-2613cacde76f-utilities\") pod \"certified-operators-txkz6\" (UID: 
\"0ca3ed36-b7c3-4926-b0d6-2613cacde76f\") " pod="openshift-marketplace/certified-operators-txkz6" Oct 07 13:35:03 crc kubenswrapper[5024]: I1007 13:35:03.636321 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ca3ed36-b7c3-4926-b0d6-2613cacde76f-catalog-content\") pod \"certified-operators-txkz6\" (UID: \"0ca3ed36-b7c3-4926-b0d6-2613cacde76f\") " pod="openshift-marketplace/certified-operators-txkz6" Oct 07 13:35:03 crc kubenswrapper[5024]: I1007 13:35:03.664606 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pmhz\" (UniqueName: \"kubernetes.io/projected/0ca3ed36-b7c3-4926-b0d6-2613cacde76f-kube-api-access-7pmhz\") pod \"certified-operators-txkz6\" (UID: \"0ca3ed36-b7c3-4926-b0d6-2613cacde76f\") " pod="openshift-marketplace/certified-operators-txkz6" Oct 07 13:35:03 crc kubenswrapper[5024]: I1007 13:35:03.820828 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-txkz6" Oct 07 13:35:04 crc kubenswrapper[5024]: I1007 13:35:04.337234 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-txkz6"] Oct 07 13:35:04 crc kubenswrapper[5024]: W1007 13:35:04.350126 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ca3ed36_b7c3_4926_b0d6_2613cacde76f.slice/crio-721f10773cd3b212559630ebd4859128cf423e992a4da5c1a3716a02e801f086 WatchSource:0}: Error finding container 721f10773cd3b212559630ebd4859128cf423e992a4da5c1a3716a02e801f086: Status 404 returned error can't find the container with id 721f10773cd3b212559630ebd4859128cf423e992a4da5c1a3716a02e801f086 Oct 07 13:35:04 crc kubenswrapper[5024]: I1007 13:35:04.468513 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txkz6" event={"ID":"0ca3ed36-b7c3-4926-b0d6-2613cacde76f","Type":"ContainerStarted","Data":"721f10773cd3b212559630ebd4859128cf423e992a4da5c1a3716a02e801f086"} Oct 07 13:35:05 crc kubenswrapper[5024]: I1007 13:35:05.482568 5024 generic.go:334] "Generic (PLEG): container finished" podID="0ca3ed36-b7c3-4926-b0d6-2613cacde76f" containerID="8dd38164ad0c696f5edb6aa82ee2f62398bcb59cf6771f75ddb905e89e304697" exitCode=0 Oct 07 13:35:05 crc kubenswrapper[5024]: I1007 13:35:05.482686 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txkz6" event={"ID":"0ca3ed36-b7c3-4926-b0d6-2613cacde76f","Type":"ContainerDied","Data":"8dd38164ad0c696f5edb6aa82ee2f62398bcb59cf6771f75ddb905e89e304697"} Oct 07 13:35:06 crc kubenswrapper[5024]: I1007 13:35:06.752434 5024 scope.go:117] "RemoveContainer" containerID="37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95" Oct 07 13:35:06 crc kubenswrapper[5024]: E1007 13:35:06.752989 5024 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:35:07 crc kubenswrapper[5024]: I1007 13:35:07.511444 5024 generic.go:334] "Generic (PLEG): container finished" podID="0ca3ed36-b7c3-4926-b0d6-2613cacde76f" containerID="f99d364f13b43799fec998b26acd003f1124f8170aba4bcaa408048f66cacbf7" exitCode=0 Oct 07 13:35:07 crc kubenswrapper[5024]: I1007 13:35:07.511539 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txkz6" event={"ID":"0ca3ed36-b7c3-4926-b0d6-2613cacde76f","Type":"ContainerDied","Data":"f99d364f13b43799fec998b26acd003f1124f8170aba4bcaa408048f66cacbf7"} Oct 07 13:35:08 crc kubenswrapper[5024]: I1007 13:35:08.526079 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txkz6" event={"ID":"0ca3ed36-b7c3-4926-b0d6-2613cacde76f","Type":"ContainerStarted","Data":"0576a726bf7b4c84786be0ef4814f065014a6a03f7b48a3e875f78fb22e724cf"} Oct 07 13:35:08 crc kubenswrapper[5024]: I1007 13:35:08.551976 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-txkz6" podStartSLOduration=3.011773758 podStartE2EDuration="5.551940147s" podCreationTimestamp="2025-10-07 13:35:03 +0000 UTC" firstStartedPulling="2025-10-07 13:35:05.48649336 +0000 UTC m=+4043.562280208" lastFinishedPulling="2025-10-07 13:35:08.026659729 +0000 UTC m=+4046.102446597" observedRunningTime="2025-10-07 13:35:08.551779112 +0000 UTC m=+4046.627565950" watchObservedRunningTime="2025-10-07 13:35:08.551940147 +0000 UTC m=+4046.627727005" Oct 07 13:35:13 crc kubenswrapper[5024]: I1007 13:35:13.822706 5024 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-txkz6" Oct 07 13:35:13 crc kubenswrapper[5024]: I1007 13:35:13.823943 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-txkz6" Oct 07 13:35:13 crc kubenswrapper[5024]: I1007 13:35:13.912722 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-txkz6" Oct 07 13:35:14 crc kubenswrapper[5024]: I1007 13:35:14.673951 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-txkz6" Oct 07 13:35:14 crc kubenswrapper[5024]: I1007 13:35:14.747326 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-txkz6"] Oct 07 13:35:16 crc kubenswrapper[5024]: I1007 13:35:16.618121 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-txkz6" podUID="0ca3ed36-b7c3-4926-b0d6-2613cacde76f" containerName="registry-server" containerID="cri-o://0576a726bf7b4c84786be0ef4814f065014a6a03f7b48a3e875f78fb22e724cf" gracePeriod=2 Oct 07 13:35:17 crc kubenswrapper[5024]: I1007 13:35:17.278195 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-txkz6" Oct 07 13:35:17 crc kubenswrapper[5024]: I1007 13:35:17.460556 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pmhz\" (UniqueName: \"kubernetes.io/projected/0ca3ed36-b7c3-4926-b0d6-2613cacde76f-kube-api-access-7pmhz\") pod \"0ca3ed36-b7c3-4926-b0d6-2613cacde76f\" (UID: \"0ca3ed36-b7c3-4926-b0d6-2613cacde76f\") " Oct 07 13:35:17 crc kubenswrapper[5024]: I1007 13:35:17.460620 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ca3ed36-b7c3-4926-b0d6-2613cacde76f-catalog-content\") pod \"0ca3ed36-b7c3-4926-b0d6-2613cacde76f\" (UID: \"0ca3ed36-b7c3-4926-b0d6-2613cacde76f\") " Oct 07 13:35:17 crc kubenswrapper[5024]: I1007 13:35:17.460678 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ca3ed36-b7c3-4926-b0d6-2613cacde76f-utilities\") pod \"0ca3ed36-b7c3-4926-b0d6-2613cacde76f\" (UID: \"0ca3ed36-b7c3-4926-b0d6-2613cacde76f\") " Oct 07 13:35:17 crc kubenswrapper[5024]: I1007 13:35:17.462496 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ca3ed36-b7c3-4926-b0d6-2613cacde76f-utilities" (OuterVolumeSpecName: "utilities") pod "0ca3ed36-b7c3-4926-b0d6-2613cacde76f" (UID: "0ca3ed36-b7c3-4926-b0d6-2613cacde76f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:35:17 crc kubenswrapper[5024]: I1007 13:35:17.474528 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ca3ed36-b7c3-4926-b0d6-2613cacde76f-kube-api-access-7pmhz" (OuterVolumeSpecName: "kube-api-access-7pmhz") pod "0ca3ed36-b7c3-4926-b0d6-2613cacde76f" (UID: "0ca3ed36-b7c3-4926-b0d6-2613cacde76f"). InnerVolumeSpecName "kube-api-access-7pmhz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:35:17 crc kubenswrapper[5024]: I1007 13:35:17.564436 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pmhz\" (UniqueName: \"kubernetes.io/projected/0ca3ed36-b7c3-4926-b0d6-2613cacde76f-kube-api-access-7pmhz\") on node \"crc\" DevicePath \"\"" Oct 07 13:35:17 crc kubenswrapper[5024]: I1007 13:35:17.564487 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ca3ed36-b7c3-4926-b0d6-2613cacde76f-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 13:35:17 crc kubenswrapper[5024]: I1007 13:35:17.632167 5024 generic.go:334] "Generic (PLEG): container finished" podID="0ca3ed36-b7c3-4926-b0d6-2613cacde76f" containerID="0576a726bf7b4c84786be0ef4814f065014a6a03f7b48a3e875f78fb22e724cf" exitCode=0 Oct 07 13:35:17 crc kubenswrapper[5024]: I1007 13:35:17.632231 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txkz6" event={"ID":"0ca3ed36-b7c3-4926-b0d6-2613cacde76f","Type":"ContainerDied","Data":"0576a726bf7b4c84786be0ef4814f065014a6a03f7b48a3e875f78fb22e724cf"} Oct 07 13:35:17 crc kubenswrapper[5024]: I1007 13:35:17.632322 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txkz6" event={"ID":"0ca3ed36-b7c3-4926-b0d6-2613cacde76f","Type":"ContainerDied","Data":"721f10773cd3b212559630ebd4859128cf423e992a4da5c1a3716a02e801f086"} Oct 07 13:35:17 crc kubenswrapper[5024]: I1007 13:35:17.632350 5024 scope.go:117] "RemoveContainer" containerID="0576a726bf7b4c84786be0ef4814f065014a6a03f7b48a3e875f78fb22e724cf" Oct 07 13:35:17 crc kubenswrapper[5024]: I1007 13:35:17.632354 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-txkz6" Oct 07 13:35:17 crc kubenswrapper[5024]: I1007 13:35:17.669552 5024 scope.go:117] "RemoveContainer" containerID="f99d364f13b43799fec998b26acd003f1124f8170aba4bcaa408048f66cacbf7" Oct 07 13:35:17 crc kubenswrapper[5024]: I1007 13:35:17.710096 5024 scope.go:117] "RemoveContainer" containerID="8dd38164ad0c696f5edb6aa82ee2f62398bcb59cf6771f75ddb905e89e304697" Oct 07 13:35:17 crc kubenswrapper[5024]: I1007 13:35:17.752250 5024 scope.go:117] "RemoveContainer" containerID="37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95" Oct 07 13:35:17 crc kubenswrapper[5024]: E1007 13:35:17.752925 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:35:17 crc kubenswrapper[5024]: I1007 13:35:17.781373 5024 scope.go:117] "RemoveContainer" containerID="0576a726bf7b4c84786be0ef4814f065014a6a03f7b48a3e875f78fb22e724cf" Oct 07 13:35:17 crc kubenswrapper[5024]: E1007 13:35:17.782439 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0576a726bf7b4c84786be0ef4814f065014a6a03f7b48a3e875f78fb22e724cf\": container with ID starting with 0576a726bf7b4c84786be0ef4814f065014a6a03f7b48a3e875f78fb22e724cf not found: ID does not exist" containerID="0576a726bf7b4c84786be0ef4814f065014a6a03f7b48a3e875f78fb22e724cf" Oct 07 13:35:17 crc kubenswrapper[5024]: I1007 13:35:17.782580 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0576a726bf7b4c84786be0ef4814f065014a6a03f7b48a3e875f78fb22e724cf"} err="failed to 
get container status \"0576a726bf7b4c84786be0ef4814f065014a6a03f7b48a3e875f78fb22e724cf\": rpc error: code = NotFound desc = could not find container \"0576a726bf7b4c84786be0ef4814f065014a6a03f7b48a3e875f78fb22e724cf\": container with ID starting with 0576a726bf7b4c84786be0ef4814f065014a6a03f7b48a3e875f78fb22e724cf not found: ID does not exist" Oct 07 13:35:17 crc kubenswrapper[5024]: I1007 13:35:17.782633 5024 scope.go:117] "RemoveContainer" containerID="f99d364f13b43799fec998b26acd003f1124f8170aba4bcaa408048f66cacbf7" Oct 07 13:35:17 crc kubenswrapper[5024]: E1007 13:35:17.783457 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f99d364f13b43799fec998b26acd003f1124f8170aba4bcaa408048f66cacbf7\": container with ID starting with f99d364f13b43799fec998b26acd003f1124f8170aba4bcaa408048f66cacbf7 not found: ID does not exist" containerID="f99d364f13b43799fec998b26acd003f1124f8170aba4bcaa408048f66cacbf7" Oct 07 13:35:17 crc kubenswrapper[5024]: I1007 13:35:17.783542 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f99d364f13b43799fec998b26acd003f1124f8170aba4bcaa408048f66cacbf7"} err="failed to get container status \"f99d364f13b43799fec998b26acd003f1124f8170aba4bcaa408048f66cacbf7\": rpc error: code = NotFound desc = could not find container \"f99d364f13b43799fec998b26acd003f1124f8170aba4bcaa408048f66cacbf7\": container with ID starting with f99d364f13b43799fec998b26acd003f1124f8170aba4bcaa408048f66cacbf7 not found: ID does not exist" Oct 07 13:35:17 crc kubenswrapper[5024]: I1007 13:35:17.783590 5024 scope.go:117] "RemoveContainer" containerID="8dd38164ad0c696f5edb6aa82ee2f62398bcb59cf6771f75ddb905e89e304697" Oct 07 13:35:17 crc kubenswrapper[5024]: E1007 13:35:17.784219 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8dd38164ad0c696f5edb6aa82ee2f62398bcb59cf6771f75ddb905e89e304697\": container with ID starting with 8dd38164ad0c696f5edb6aa82ee2f62398bcb59cf6771f75ddb905e89e304697 not found: ID does not exist" containerID="8dd38164ad0c696f5edb6aa82ee2f62398bcb59cf6771f75ddb905e89e304697" Oct 07 13:35:17 crc kubenswrapper[5024]: I1007 13:35:17.784273 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dd38164ad0c696f5edb6aa82ee2f62398bcb59cf6771f75ddb905e89e304697"} err="failed to get container status \"8dd38164ad0c696f5edb6aa82ee2f62398bcb59cf6771f75ddb905e89e304697\": rpc error: code = NotFound desc = could not find container \"8dd38164ad0c696f5edb6aa82ee2f62398bcb59cf6771f75ddb905e89e304697\": container with ID starting with 8dd38164ad0c696f5edb6aa82ee2f62398bcb59cf6771f75ddb905e89e304697 not found: ID does not exist" Oct 07 13:35:17 crc kubenswrapper[5024]: I1007 13:35:17.913710 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ca3ed36-b7c3-4926-b0d6-2613cacde76f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ca3ed36-b7c3-4926-b0d6-2613cacde76f" (UID: "0ca3ed36-b7c3-4926-b0d6-2613cacde76f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:35:17 crc kubenswrapper[5024]: I1007 13:35:17.975076 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ca3ed36-b7c3-4926-b0d6-2613cacde76f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 13:35:17 crc kubenswrapper[5024]: I1007 13:35:17.977624 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-txkz6"] Oct 07 13:35:17 crc kubenswrapper[5024]: I1007 13:35:17.992478 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-txkz6"] Oct 07 13:35:18 crc kubenswrapper[5024]: I1007 13:35:18.772961 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ca3ed36-b7c3-4926-b0d6-2613cacde76f" path="/var/lib/kubelet/pods/0ca3ed36-b7c3-4926-b0d6-2613cacde76f/volumes" Oct 07 13:35:28 crc kubenswrapper[5024]: I1007 13:35:28.752505 5024 scope.go:117] "RemoveContainer" containerID="37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95" Oct 07 13:35:28 crc kubenswrapper[5024]: E1007 13:35:28.753854 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:35:42 crc kubenswrapper[5024]: I1007 13:35:42.766067 5024 scope.go:117] "RemoveContainer" containerID="37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95" Oct 07 13:35:42 crc kubenswrapper[5024]: E1007 13:35:42.767471 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:35:53 crc kubenswrapper[5024]: I1007 13:35:53.753717 5024 scope.go:117] "RemoveContainer" containerID="37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95" Oct 07 13:35:55 crc kubenswrapper[5024]: I1007 13:35:55.074478 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerStarted","Data":"867cd54ea731525d1cb2eec5449c48dc6bb40566f1f0eac2645de62b568d1f79"} Oct 07 13:37:37 crc kubenswrapper[5024]: I1007 13:37:37.692758 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jnjvt"] Oct 07 13:37:37 crc kubenswrapper[5024]: E1007 13:37:37.693821 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ca3ed36-b7c3-4926-b0d6-2613cacde76f" containerName="registry-server" Oct 07 13:37:37 crc kubenswrapper[5024]: I1007 13:37:37.693836 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ca3ed36-b7c3-4926-b0d6-2613cacde76f" containerName="registry-server" Oct 07 13:37:37 crc kubenswrapper[5024]: E1007 13:37:37.693872 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ca3ed36-b7c3-4926-b0d6-2613cacde76f" containerName="extract-utilities" Oct 07 13:37:37 crc kubenswrapper[5024]: I1007 13:37:37.693879 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ca3ed36-b7c3-4926-b0d6-2613cacde76f" containerName="extract-utilities" Oct 07 13:37:37 crc kubenswrapper[5024]: E1007 13:37:37.693891 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ca3ed36-b7c3-4926-b0d6-2613cacde76f" containerName="extract-content" Oct 07 13:37:37 crc 
kubenswrapper[5024]: I1007 13:37:37.693898 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ca3ed36-b7c3-4926-b0d6-2613cacde76f" containerName="extract-content" Oct 07 13:37:37 crc kubenswrapper[5024]: I1007 13:37:37.694334 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ca3ed36-b7c3-4926-b0d6-2613cacde76f" containerName="registry-server" Oct 07 13:37:37 crc kubenswrapper[5024]: I1007 13:37:37.699947 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jnjvt" Oct 07 13:37:37 crc kubenswrapper[5024]: I1007 13:37:37.716886 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jnjvt"] Oct 07 13:37:37 crc kubenswrapper[5024]: I1007 13:37:37.894563 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c97e4a5-9d3e-405f-9e29-ac27b0a85576-utilities\") pod \"redhat-marketplace-jnjvt\" (UID: \"4c97e4a5-9d3e-405f-9e29-ac27b0a85576\") " pod="openshift-marketplace/redhat-marketplace-jnjvt" Oct 07 13:37:37 crc kubenswrapper[5024]: I1007 13:37:37.895069 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqblc\" (UniqueName: \"kubernetes.io/projected/4c97e4a5-9d3e-405f-9e29-ac27b0a85576-kube-api-access-nqblc\") pod \"redhat-marketplace-jnjvt\" (UID: \"4c97e4a5-9d3e-405f-9e29-ac27b0a85576\") " pod="openshift-marketplace/redhat-marketplace-jnjvt" Oct 07 13:37:37 crc kubenswrapper[5024]: I1007 13:37:37.895234 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c97e4a5-9d3e-405f-9e29-ac27b0a85576-catalog-content\") pod \"redhat-marketplace-jnjvt\" (UID: \"4c97e4a5-9d3e-405f-9e29-ac27b0a85576\") " pod="openshift-marketplace/redhat-marketplace-jnjvt" Oct 07 13:37:37 
crc kubenswrapper[5024]: I1007 13:37:37.996884 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c97e4a5-9d3e-405f-9e29-ac27b0a85576-utilities\") pod \"redhat-marketplace-jnjvt\" (UID: \"4c97e4a5-9d3e-405f-9e29-ac27b0a85576\") " pod="openshift-marketplace/redhat-marketplace-jnjvt" Oct 07 13:37:37 crc kubenswrapper[5024]: I1007 13:37:37.996999 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqblc\" (UniqueName: \"kubernetes.io/projected/4c97e4a5-9d3e-405f-9e29-ac27b0a85576-kube-api-access-nqblc\") pod \"redhat-marketplace-jnjvt\" (UID: \"4c97e4a5-9d3e-405f-9e29-ac27b0a85576\") " pod="openshift-marketplace/redhat-marketplace-jnjvt" Oct 07 13:37:37 crc kubenswrapper[5024]: I1007 13:37:37.997029 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c97e4a5-9d3e-405f-9e29-ac27b0a85576-catalog-content\") pod \"redhat-marketplace-jnjvt\" (UID: \"4c97e4a5-9d3e-405f-9e29-ac27b0a85576\") " pod="openshift-marketplace/redhat-marketplace-jnjvt" Oct 07 13:37:37 crc kubenswrapper[5024]: I1007 13:37:37.997470 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c97e4a5-9d3e-405f-9e29-ac27b0a85576-utilities\") pod \"redhat-marketplace-jnjvt\" (UID: \"4c97e4a5-9d3e-405f-9e29-ac27b0a85576\") " pod="openshift-marketplace/redhat-marketplace-jnjvt" Oct 07 13:37:37 crc kubenswrapper[5024]: I1007 13:37:37.997532 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c97e4a5-9d3e-405f-9e29-ac27b0a85576-catalog-content\") pod \"redhat-marketplace-jnjvt\" (UID: \"4c97e4a5-9d3e-405f-9e29-ac27b0a85576\") " pod="openshift-marketplace/redhat-marketplace-jnjvt" Oct 07 13:37:38 crc kubenswrapper[5024]: I1007 13:37:38.023474 5024 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqblc\" (UniqueName: \"kubernetes.io/projected/4c97e4a5-9d3e-405f-9e29-ac27b0a85576-kube-api-access-nqblc\") pod \"redhat-marketplace-jnjvt\" (UID: \"4c97e4a5-9d3e-405f-9e29-ac27b0a85576\") " pod="openshift-marketplace/redhat-marketplace-jnjvt"
Oct 07 13:37:38 crc kubenswrapper[5024]: I1007 13:37:38.043267 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jnjvt"
Oct 07 13:37:38 crc kubenswrapper[5024]: I1007 13:37:38.582810 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jnjvt"]
Oct 07 13:37:39 crc kubenswrapper[5024]: I1007 13:37:39.222344 5024 generic.go:334] "Generic (PLEG): container finished" podID="4c97e4a5-9d3e-405f-9e29-ac27b0a85576" containerID="eac60109b4ef34a3ca6e55a1eaa7d6fcef003c4ef480a32c4b1ee88bb1039dd9" exitCode=0
Oct 07 13:37:39 crc kubenswrapper[5024]: I1007 13:37:39.222477 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jnjvt" event={"ID":"4c97e4a5-9d3e-405f-9e29-ac27b0a85576","Type":"ContainerDied","Data":"eac60109b4ef34a3ca6e55a1eaa7d6fcef003c4ef480a32c4b1ee88bb1039dd9"}
Oct 07 13:37:39 crc kubenswrapper[5024]: I1007 13:37:39.222679 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jnjvt" event={"ID":"4c97e4a5-9d3e-405f-9e29-ac27b0a85576","Type":"ContainerStarted","Data":"548b096118b01b390ad2ba21fb205171a84b4bf1bf03d5416a8422c91dda31d4"}
Oct 07 13:37:39 crc kubenswrapper[5024]: I1007 13:37:39.225517 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 07 13:37:41 crc kubenswrapper[5024]: I1007 13:37:41.250912 5024 generic.go:334] "Generic (PLEG): container finished" podID="4c97e4a5-9d3e-405f-9e29-ac27b0a85576" containerID="054c7e11da678fcc74597d0f789167ce72919ad1aa48f6f68afc333132f31619" exitCode=0
Oct 07 13:37:41 crc kubenswrapper[5024]: I1007 13:37:41.251024 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jnjvt" event={"ID":"4c97e4a5-9d3e-405f-9e29-ac27b0a85576","Type":"ContainerDied","Data":"054c7e11da678fcc74597d0f789167ce72919ad1aa48f6f68afc333132f31619"}
Oct 07 13:37:42 crc kubenswrapper[5024]: I1007 13:37:42.268603 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jnjvt" event={"ID":"4c97e4a5-9d3e-405f-9e29-ac27b0a85576","Type":"ContainerStarted","Data":"48a7ec2734f50f9714cbafdf6137dee8edcd2a17ff2f804e26a34f49f690f687"}
Oct 07 13:37:42 crc kubenswrapper[5024]: I1007 13:37:42.291871 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jnjvt" podStartSLOduration=2.769513023 podStartE2EDuration="5.291845356s" podCreationTimestamp="2025-10-07 13:37:37 +0000 UTC" firstStartedPulling="2025-10-07 13:37:39.225123131 +0000 UTC m=+4197.300909989" lastFinishedPulling="2025-10-07 13:37:41.747455494 +0000 UTC m=+4199.823242322" observedRunningTime="2025-10-07 13:37:42.284797332 +0000 UTC m=+4200.360584180" watchObservedRunningTime="2025-10-07 13:37:42.291845356 +0000 UTC m=+4200.367632194"
Oct 07 13:37:48 crc kubenswrapper[5024]: I1007 13:37:48.043519 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jnjvt"
Oct 07 13:37:48 crc kubenswrapper[5024]: I1007 13:37:48.045917 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jnjvt"
Oct 07 13:37:48 crc kubenswrapper[5024]: I1007 13:37:48.101501 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jnjvt"
Oct 07 13:37:48 crc kubenswrapper[5024]: I1007 13:37:48.421104 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jnjvt"
Oct 07 13:37:53 crc kubenswrapper[5024]: I1007 13:37:53.011642 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jnjvt"]
Oct 07 13:37:53 crc kubenswrapper[5024]: I1007 13:37:53.012682 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jnjvt" podUID="4c97e4a5-9d3e-405f-9e29-ac27b0a85576" containerName="registry-server" containerID="cri-o://48a7ec2734f50f9714cbafdf6137dee8edcd2a17ff2f804e26a34f49f690f687" gracePeriod=2
Oct 07 13:37:53 crc kubenswrapper[5024]: I1007 13:37:53.434263 5024 generic.go:334] "Generic (PLEG): container finished" podID="4c97e4a5-9d3e-405f-9e29-ac27b0a85576" containerID="48a7ec2734f50f9714cbafdf6137dee8edcd2a17ff2f804e26a34f49f690f687" exitCode=0
Oct 07 13:37:53 crc kubenswrapper[5024]: I1007 13:37:53.434470 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jnjvt" event={"ID":"4c97e4a5-9d3e-405f-9e29-ac27b0a85576","Type":"ContainerDied","Data":"48a7ec2734f50f9714cbafdf6137dee8edcd2a17ff2f804e26a34f49f690f687"}
Oct 07 13:37:53 crc kubenswrapper[5024]: I1007 13:37:53.707598 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jnjvt"
Oct 07 13:37:53 crc kubenswrapper[5024]: I1007 13:37:53.774001 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqblc\" (UniqueName: \"kubernetes.io/projected/4c97e4a5-9d3e-405f-9e29-ac27b0a85576-kube-api-access-nqblc\") pod \"4c97e4a5-9d3e-405f-9e29-ac27b0a85576\" (UID: \"4c97e4a5-9d3e-405f-9e29-ac27b0a85576\") "
Oct 07 13:37:53 crc kubenswrapper[5024]: I1007 13:37:53.774335 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c97e4a5-9d3e-405f-9e29-ac27b0a85576-utilities\") pod \"4c97e4a5-9d3e-405f-9e29-ac27b0a85576\" (UID: \"4c97e4a5-9d3e-405f-9e29-ac27b0a85576\") "
Oct 07 13:37:53 crc kubenswrapper[5024]: I1007 13:37:53.774619 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c97e4a5-9d3e-405f-9e29-ac27b0a85576-catalog-content\") pod \"4c97e4a5-9d3e-405f-9e29-ac27b0a85576\" (UID: \"4c97e4a5-9d3e-405f-9e29-ac27b0a85576\") "
Oct 07 13:37:53 crc kubenswrapper[5024]: I1007 13:37:53.775830 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c97e4a5-9d3e-405f-9e29-ac27b0a85576-utilities" (OuterVolumeSpecName: "utilities") pod "4c97e4a5-9d3e-405f-9e29-ac27b0a85576" (UID: "4c97e4a5-9d3e-405f-9e29-ac27b0a85576"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 07 13:37:53 crc kubenswrapper[5024]: I1007 13:37:53.785389 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c97e4a5-9d3e-405f-9e29-ac27b0a85576-kube-api-access-nqblc" (OuterVolumeSpecName: "kube-api-access-nqblc") pod "4c97e4a5-9d3e-405f-9e29-ac27b0a85576" (UID: "4c97e4a5-9d3e-405f-9e29-ac27b0a85576"). InnerVolumeSpecName "kube-api-access-nqblc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 13:37:53 crc kubenswrapper[5024]: I1007 13:37:53.801185 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c97e4a5-9d3e-405f-9e29-ac27b0a85576-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c97e4a5-9d3e-405f-9e29-ac27b0a85576" (UID: "4c97e4a5-9d3e-405f-9e29-ac27b0a85576"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 07 13:37:53 crc kubenswrapper[5024]: I1007 13:37:53.877418 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c97e4a5-9d3e-405f-9e29-ac27b0a85576-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 07 13:37:53 crc kubenswrapper[5024]: I1007 13:37:53.877462 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqblc\" (UniqueName: \"kubernetes.io/projected/4c97e4a5-9d3e-405f-9e29-ac27b0a85576-kube-api-access-nqblc\") on node \"crc\" DevicePath \"\""
Oct 07 13:37:53 crc kubenswrapper[5024]: I1007 13:37:53.877478 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c97e4a5-9d3e-405f-9e29-ac27b0a85576-utilities\") on node \"crc\" DevicePath \"\""
Oct 07 13:37:54 crc kubenswrapper[5024]: I1007 13:37:54.446238 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jnjvt" event={"ID":"4c97e4a5-9d3e-405f-9e29-ac27b0a85576","Type":"ContainerDied","Data":"548b096118b01b390ad2ba21fb205171a84b4bf1bf03d5416a8422c91dda31d4"}
Oct 07 13:37:54 crc kubenswrapper[5024]: I1007 13:37:54.446745 5024 scope.go:117] "RemoveContainer" containerID="48a7ec2734f50f9714cbafdf6137dee8edcd2a17ff2f804e26a34f49f690f687"
Oct 07 13:37:54 crc kubenswrapper[5024]: I1007 13:37:54.446532 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jnjvt"
Oct 07 13:37:54 crc kubenswrapper[5024]: I1007 13:37:54.468986 5024 scope.go:117] "RemoveContainer" containerID="054c7e11da678fcc74597d0f789167ce72919ad1aa48f6f68afc333132f31619"
Oct 07 13:37:54 crc kubenswrapper[5024]: I1007 13:37:54.501925 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jnjvt"]
Oct 07 13:37:54 crc kubenswrapper[5024]: I1007 13:37:54.507209 5024 scope.go:117] "RemoveContainer" containerID="eac60109b4ef34a3ca6e55a1eaa7d6fcef003c4ef480a32c4b1ee88bb1039dd9"
Oct 07 13:37:54 crc kubenswrapper[5024]: I1007 13:37:54.511053 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jnjvt"]
Oct 07 13:37:54 crc kubenswrapper[5024]: I1007 13:37:54.768265 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c97e4a5-9d3e-405f-9e29-ac27b0a85576" path="/var/lib/kubelet/pods/4c97e4a5-9d3e-405f-9e29-ac27b0a85576/volumes"
Oct 07 13:38:13 crc kubenswrapper[5024]: I1007 13:38:13.720793 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 07 13:38:13 crc kubenswrapper[5024]: I1007 13:38:13.721580 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 07 13:38:43 crc kubenswrapper[5024]: I1007 13:38:43.723063 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 07 13:38:43 crc kubenswrapper[5024]: I1007 13:38:43.723706 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 07 13:39:13 crc kubenswrapper[5024]: I1007 13:39:13.720646 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 07 13:39:13 crc kubenswrapper[5024]: I1007 13:39:13.721587 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 07 13:39:13 crc kubenswrapper[5024]: I1007 13:39:13.721667 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t95cr"
Oct 07 13:39:13 crc kubenswrapper[5024]: I1007 13:39:13.722976 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"867cd54ea731525d1cb2eec5449c48dc6bb40566f1f0eac2645de62b568d1f79"} pod="openshift-machine-config-operator/machine-config-daemon-t95cr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 07 13:39:13 crc kubenswrapper[5024]: I1007 13:39:13.723102 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" containerID="cri-o://867cd54ea731525d1cb2eec5449c48dc6bb40566f1f0eac2645de62b568d1f79" gracePeriod=600
Oct 07 13:39:14 crc kubenswrapper[5024]: I1007 13:39:14.346706 5024 generic.go:334] "Generic (PLEG): container finished" podID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerID="867cd54ea731525d1cb2eec5449c48dc6bb40566f1f0eac2645de62b568d1f79" exitCode=0
Oct 07 13:39:14 crc kubenswrapper[5024]: I1007 13:39:14.347424 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerDied","Data":"867cd54ea731525d1cb2eec5449c48dc6bb40566f1f0eac2645de62b568d1f79"}
Oct 07 13:39:14 crc kubenswrapper[5024]: I1007 13:39:14.347472 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerStarted","Data":"995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73"}
Oct 07 13:39:14 crc kubenswrapper[5024]: I1007 13:39:14.347498 5024 scope.go:117] "RemoveContainer" containerID="37538c8aaa1bba160010eab19f68633bcbb990fe66a0dfcb113824703c50de95"
Oct 07 13:41:43 crc kubenswrapper[5024]: I1007 13:41:43.720254 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 07 13:41:43 crc kubenswrapper[5024]: I1007 13:41:43.721031 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 07 13:42:13 crc kubenswrapper[5024]: I1007 13:42:13.720278 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 07 13:42:13 crc kubenswrapper[5024]: I1007 13:42:13.720978 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 07 13:42:43 crc kubenswrapper[5024]: I1007 13:42:43.720470 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 07 13:42:43 crc kubenswrapper[5024]: I1007 13:42:43.721169 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 07 13:42:43 crc kubenswrapper[5024]: I1007 13:42:43.721236 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t95cr"
Oct 07 13:42:43 crc kubenswrapper[5024]: I1007 13:42:43.722459 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73"} pod="openshift-machine-config-operator/machine-config-daemon-t95cr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 07 13:42:43 crc kubenswrapper[5024]: I1007 13:42:43.722559 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" containerID="cri-o://995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73" gracePeriod=600
Oct 07 13:42:43 crc kubenswrapper[5024]: E1007 13:42:43.854726 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b"
Oct 07 13:42:44 crc kubenswrapper[5024]: I1007 13:42:44.674677 5024 generic.go:334] "Generic (PLEG): container finished" podID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerID="995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73" exitCode=0
Oct 07 13:42:44 crc kubenswrapper[5024]: I1007 13:42:44.674777 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerDied","Data":"995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73"}
Oct 07 13:42:44 crc kubenswrapper[5024]: I1007 13:42:44.675270 5024 scope.go:117] "RemoveContainer" containerID="867cd54ea731525d1cb2eec5449c48dc6bb40566f1f0eac2645de62b568d1f79"
Oct 07 13:42:44 crc kubenswrapper[5024]: I1007 13:42:44.676516 5024 scope.go:117] "RemoveContainer" containerID="995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73"
Oct 07 13:42:44 crc kubenswrapper[5024]: E1007 13:42:44.678300 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b"
Oct 07 13:42:54 crc kubenswrapper[5024]: I1007 13:42:54.753221 5024 scope.go:117] "RemoveContainer" containerID="995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73"
Oct 07 13:42:54 crc kubenswrapper[5024]: E1007 13:42:54.754299 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b"
Oct 07 13:43:07 crc kubenswrapper[5024]: I1007 13:43:07.751900 5024 scope.go:117] "RemoveContainer" containerID="995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73"
Oct 07 13:43:07 crc kubenswrapper[5024]: E1007 13:43:07.752879 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b"
Oct 07 13:43:21 crc kubenswrapper[5024]: I1007 13:43:21.752013 5024 scope.go:117] "RemoveContainer" containerID="995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73"
Oct 07 13:43:21 crc kubenswrapper[5024]: E1007 13:43:21.753158 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b"
Oct 07 13:43:36 crc kubenswrapper[5024]: I1007 13:43:36.753284 5024 scope.go:117] "RemoveContainer" containerID="995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73"
Oct 07 13:43:36 crc kubenswrapper[5024]: E1007 13:43:36.754579 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b"
Oct 07 13:43:49 crc kubenswrapper[5024]: I1007 13:43:49.752424 5024 scope.go:117] "RemoveContainer" containerID="995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73"
Oct 07 13:43:49 crc kubenswrapper[5024]: E1007 13:43:49.753596 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b"
Oct 07 13:44:02 crc kubenswrapper[5024]: I1007 13:44:02.761021 5024 scope.go:117] "RemoveContainer" containerID="995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73"
Oct 07 13:44:02 crc kubenswrapper[5024]: E1007 13:44:02.762541 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b"
Oct 07 13:44:16 crc kubenswrapper[5024]: I1007 13:44:16.751629 5024 scope.go:117] "RemoveContainer" containerID="995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73"
Oct 07 13:44:16 crc kubenswrapper[5024]: E1007 13:44:16.752389 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b"
Oct 07 13:44:26 crc kubenswrapper[5024]: I1007 13:44:26.825825 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d5fxt"]
Oct 07 13:44:26 crc kubenswrapper[5024]: E1007 13:44:26.826968 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c97e4a5-9d3e-405f-9e29-ac27b0a85576" containerName="extract-content"
Oct 07 13:44:26 crc kubenswrapper[5024]: I1007 13:44:26.826987 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c97e4a5-9d3e-405f-9e29-ac27b0a85576" containerName="extract-content"
Oct 07 13:44:26 crc kubenswrapper[5024]: E1007 13:44:26.827019 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c97e4a5-9d3e-405f-9e29-ac27b0a85576" containerName="registry-server"
Oct 07 13:44:26 crc kubenswrapper[5024]: I1007 13:44:26.827027 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c97e4a5-9d3e-405f-9e29-ac27b0a85576" containerName="registry-server"
Oct 07 13:44:26 crc kubenswrapper[5024]: E1007 13:44:26.827047 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c97e4a5-9d3e-405f-9e29-ac27b0a85576" containerName="extract-utilities"
Oct 07 13:44:26 crc kubenswrapper[5024]: I1007 13:44:26.827056 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c97e4a5-9d3e-405f-9e29-ac27b0a85576" containerName="extract-utilities"
Oct 07 13:44:26 crc kubenswrapper[5024]: I1007 13:44:26.827971 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c97e4a5-9d3e-405f-9e29-ac27b0a85576" containerName="registry-server"
Oct 07 13:44:26 crc kubenswrapper[5024]: I1007 13:44:26.829748 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d5fxt"
Oct 07 13:44:26 crc kubenswrapper[5024]: I1007 13:44:26.854479 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d5fxt"]
Oct 07 13:44:26 crc kubenswrapper[5024]: I1007 13:44:26.928304 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25bcf9ac-af98-4ac1-937c-b1f3f169b872-utilities\") pod \"redhat-operators-d5fxt\" (UID: \"25bcf9ac-af98-4ac1-937c-b1f3f169b872\") " pod="openshift-marketplace/redhat-operators-d5fxt"
Oct 07 13:44:26 crc kubenswrapper[5024]: I1007 13:44:26.928855 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx99v\" (UniqueName: \"kubernetes.io/projected/25bcf9ac-af98-4ac1-937c-b1f3f169b872-kube-api-access-cx99v\") pod \"redhat-operators-d5fxt\" (UID: \"25bcf9ac-af98-4ac1-937c-b1f3f169b872\") " pod="openshift-marketplace/redhat-operators-d5fxt"
Oct 07 13:44:26 crc kubenswrapper[5024]: I1007 13:44:26.928967 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25bcf9ac-af98-4ac1-937c-b1f3f169b872-catalog-content\") pod \"redhat-operators-d5fxt\" (UID: \"25bcf9ac-af98-4ac1-937c-b1f3f169b872\") " pod="openshift-marketplace/redhat-operators-d5fxt"
Oct 07 13:44:27 crc kubenswrapper[5024]: I1007 13:44:27.030839 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25bcf9ac-af98-4ac1-937c-b1f3f169b872-utilities\") pod \"redhat-operators-d5fxt\" (UID: \"25bcf9ac-af98-4ac1-937c-b1f3f169b872\") " pod="openshift-marketplace/redhat-operators-d5fxt"
Oct 07 13:44:27 crc kubenswrapper[5024]: I1007 13:44:27.030983 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx99v\" (UniqueName: \"kubernetes.io/projected/25bcf9ac-af98-4ac1-937c-b1f3f169b872-kube-api-access-cx99v\") pod \"redhat-operators-d5fxt\" (UID: \"25bcf9ac-af98-4ac1-937c-b1f3f169b872\") " pod="openshift-marketplace/redhat-operators-d5fxt"
Oct 07 13:44:27 crc kubenswrapper[5024]: I1007 13:44:27.031033 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25bcf9ac-af98-4ac1-937c-b1f3f169b872-catalog-content\") pod \"redhat-operators-d5fxt\" (UID: \"25bcf9ac-af98-4ac1-937c-b1f3f169b872\") " pod="openshift-marketplace/redhat-operators-d5fxt"
Oct 07 13:44:27 crc kubenswrapper[5024]: I1007 13:44:27.031584 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25bcf9ac-af98-4ac1-937c-b1f3f169b872-utilities\") pod \"redhat-operators-d5fxt\" (UID: \"25bcf9ac-af98-4ac1-937c-b1f3f169b872\") " pod="openshift-marketplace/redhat-operators-d5fxt"
Oct 07 13:44:27 crc kubenswrapper[5024]: I1007 13:44:27.032220 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25bcf9ac-af98-4ac1-937c-b1f3f169b872-catalog-content\") pod \"redhat-operators-d5fxt\" (UID: \"25bcf9ac-af98-4ac1-937c-b1f3f169b872\") " pod="openshift-marketplace/redhat-operators-d5fxt"
Oct 07 13:44:27 crc kubenswrapper[5024]: I1007 13:44:27.050968 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx99v\" (UniqueName: \"kubernetes.io/projected/25bcf9ac-af98-4ac1-937c-b1f3f169b872-kube-api-access-cx99v\") pod \"redhat-operators-d5fxt\" (UID: \"25bcf9ac-af98-4ac1-937c-b1f3f169b872\") " pod="openshift-marketplace/redhat-operators-d5fxt"
Oct 07 13:44:27 crc kubenswrapper[5024]: I1007 13:44:27.167343 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d5fxt"
Oct 07 13:44:27 crc kubenswrapper[5024]: I1007 13:44:27.715905 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d5fxt"]
Oct 07 13:44:27 crc kubenswrapper[5024]: I1007 13:44:27.855301 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5fxt" event={"ID":"25bcf9ac-af98-4ac1-937c-b1f3f169b872","Type":"ContainerStarted","Data":"046b2695955971c2e8487cf4178f3493b4d846655500dc99bba21e30206b2898"}
Oct 07 13:44:28 crc kubenswrapper[5024]: I1007 13:44:28.752699 5024 scope.go:117] "RemoveContainer" containerID="995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73"
Oct 07 13:44:28 crc kubenswrapper[5024]: E1007 13:44:28.753866 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b"
Oct 07 13:44:28 crc kubenswrapper[5024]: I1007 13:44:28.867538 5024 generic.go:334] "Generic (PLEG): container finished" podID="25bcf9ac-af98-4ac1-937c-b1f3f169b872" containerID="4643cef35281ed05a464c6fba17335ba3e158f197fec7a3d7d338af51fdfd44f" exitCode=0
Oct 07 13:44:28 crc kubenswrapper[5024]: I1007 13:44:28.867635 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5fxt" event={"ID":"25bcf9ac-af98-4ac1-937c-b1f3f169b872","Type":"ContainerDied","Data":"4643cef35281ed05a464c6fba17335ba3e158f197fec7a3d7d338af51fdfd44f"}
Oct 07 13:44:28 crc kubenswrapper[5024]: I1007 13:44:28.870368 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 07 13:44:30 crc kubenswrapper[5024]: I1007 13:44:30.896121 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5fxt" event={"ID":"25bcf9ac-af98-4ac1-937c-b1f3f169b872","Type":"ContainerStarted","Data":"935e6d74485ae662827691a419cf3433e3d8bb9d7bb1a80ac02b8ba2908ed4d5"}
Oct 07 13:44:32 crc kubenswrapper[5024]: I1007 13:44:32.915657 5024 generic.go:334] "Generic (PLEG): container finished" podID="25bcf9ac-af98-4ac1-937c-b1f3f169b872" containerID="935e6d74485ae662827691a419cf3433e3d8bb9d7bb1a80ac02b8ba2908ed4d5" exitCode=0
Oct 07 13:44:32 crc kubenswrapper[5024]: I1007 13:44:32.915759 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5fxt" event={"ID":"25bcf9ac-af98-4ac1-937c-b1f3f169b872","Type":"ContainerDied","Data":"935e6d74485ae662827691a419cf3433e3d8bb9d7bb1a80ac02b8ba2908ed4d5"}
Oct 07 13:44:34 crc kubenswrapper[5024]: I1007 13:44:34.975564 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5fxt" event={"ID":"25bcf9ac-af98-4ac1-937c-b1f3f169b872","Type":"ContainerStarted","Data":"e958aab214c99810023a30ec820a6a651da016a484ec11efa8b825c3e0c859d4"}
Oct 07 13:44:34 crc kubenswrapper[5024]: I1007 13:44:34.993800 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d5fxt" podStartSLOduration=4.139147761 podStartE2EDuration="8.993779518s" podCreationTimestamp="2025-10-07 13:44:26 +0000 UTC" firstStartedPulling="2025-10-07 13:44:28.869845891 +0000 UTC m=+4606.945632779" lastFinishedPulling="2025-10-07 13:44:33.724477688 +0000 UTC m=+4611.800264536" observedRunningTime="2025-10-07 13:44:34.991169472 +0000 UTC m=+4613.066956310" watchObservedRunningTime="2025-10-07 13:44:34.993779518 +0000 UTC m=+4613.069566356"
Oct 07 13:44:37 crc kubenswrapper[5024]: I1007 13:44:37.167907 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d5fxt"
Oct 07 13:44:37 crc kubenswrapper[5024]: I1007 13:44:37.168573 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d5fxt"
Oct 07 13:44:38 crc kubenswrapper[5024]: I1007 13:44:38.237958 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d5fxt" podUID="25bcf9ac-af98-4ac1-937c-b1f3f169b872" containerName="registry-server" probeResult="failure" output=<
Oct 07 13:44:38 crc kubenswrapper[5024]: timeout: failed to connect service ":50051" within 1s
Oct 07 13:44:38 crc kubenswrapper[5024]: >
Oct 07 13:44:41 crc kubenswrapper[5024]: I1007 13:44:41.752012 5024 scope.go:117] "RemoveContainer" containerID="995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73"
Oct 07 13:44:41 crc kubenswrapper[5024]: E1007 13:44:41.754035 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b"
Oct 07 13:44:47 crc kubenswrapper[5024]: I1007 13:44:47.232292 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d5fxt"
Oct 07 13:44:47 crc kubenswrapper[5024]: I1007 13:44:47.326079 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d5fxt"
Oct 07 13:44:47 crc kubenswrapper[5024]: I1007 13:44:47.474082 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d5fxt"]
Oct 07 13:44:49 crc kubenswrapper[5024]: I1007 13:44:49.129650 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d5fxt" podUID="25bcf9ac-af98-4ac1-937c-b1f3f169b872" containerName="registry-server" containerID="cri-o://e958aab214c99810023a30ec820a6a651da016a484ec11efa8b825c3e0c859d4" gracePeriod=2
Oct 07 13:44:49 crc kubenswrapper[5024]: I1007 13:44:49.875191 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d5fxt"
Oct 07 13:44:50 crc kubenswrapper[5024]: I1007 13:44:50.055580 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25bcf9ac-af98-4ac1-937c-b1f3f169b872-catalog-content\") pod \"25bcf9ac-af98-4ac1-937c-b1f3f169b872\" (UID: \"25bcf9ac-af98-4ac1-937c-b1f3f169b872\") "
Oct 07 13:44:50 crc kubenswrapper[5024]: I1007 13:44:50.055621 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25bcf9ac-af98-4ac1-937c-b1f3f169b872-utilities\") pod \"25bcf9ac-af98-4ac1-937c-b1f3f169b872\" (UID: \"25bcf9ac-af98-4ac1-937c-b1f3f169b872\") "
Oct 07 13:44:50 crc kubenswrapper[5024]: I1007 13:44:50.055685 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx99v\" (UniqueName: \"kubernetes.io/projected/25bcf9ac-af98-4ac1-937c-b1f3f169b872-kube-api-access-cx99v\") pod \"25bcf9ac-af98-4ac1-937c-b1f3f169b872\" (UID: \"25bcf9ac-af98-4ac1-937c-b1f3f169b872\") "
Oct 07 13:44:50 crc kubenswrapper[5024]: I1007 13:44:50.057708 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25bcf9ac-af98-4ac1-937c-b1f3f169b872-utilities" (OuterVolumeSpecName: "utilities") pod "25bcf9ac-af98-4ac1-937c-b1f3f169b872" (UID: "25bcf9ac-af98-4ac1-937c-b1f3f169b872"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 07 13:44:50 crc kubenswrapper[5024]: I1007 13:44:50.070157 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25bcf9ac-af98-4ac1-937c-b1f3f169b872-kube-api-access-cx99v" (OuterVolumeSpecName: "kube-api-access-cx99v") pod "25bcf9ac-af98-4ac1-937c-b1f3f169b872" (UID: "25bcf9ac-af98-4ac1-937c-b1f3f169b872"). InnerVolumeSpecName "kube-api-access-cx99v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 13:44:50 crc kubenswrapper[5024]: I1007 13:44:50.140609 5024 generic.go:334] "Generic (PLEG): container finished" podID="25bcf9ac-af98-4ac1-937c-b1f3f169b872" containerID="e958aab214c99810023a30ec820a6a651da016a484ec11efa8b825c3e0c859d4" exitCode=0
Oct 07 13:44:50 crc kubenswrapper[5024]: I1007 13:44:50.140656 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5fxt" event={"ID":"25bcf9ac-af98-4ac1-937c-b1f3f169b872","Type":"ContainerDied","Data":"e958aab214c99810023a30ec820a6a651da016a484ec11efa8b825c3e0c859d4"}
Oct 07 13:44:50 crc kubenswrapper[5024]: I1007 13:44:50.140691 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5fxt" event={"ID":"25bcf9ac-af98-4ac1-937c-b1f3f169b872","Type":"ContainerDied","Data":"046b2695955971c2e8487cf4178f3493b4d846655500dc99bba21e30206b2898"}
Oct 07 13:44:50 crc kubenswrapper[5024]: I1007 13:44:50.140708 5024 scope.go:117] "RemoveContainer" containerID="e958aab214c99810023a30ec820a6a651da016a484ec11efa8b825c3e0c859d4"
Oct 07 13:44:50 crc kubenswrapper[5024]: I1007 13:44:50.140844 5024 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-d5fxt" Oct 07 13:44:50 crc kubenswrapper[5024]: I1007 13:44:50.158045 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25bcf9ac-af98-4ac1-937c-b1f3f169b872-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 13:44:50 crc kubenswrapper[5024]: I1007 13:44:50.158074 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx99v\" (UniqueName: \"kubernetes.io/projected/25bcf9ac-af98-4ac1-937c-b1f3f169b872-kube-api-access-cx99v\") on node \"crc\" DevicePath \"\"" Oct 07 13:44:50 crc kubenswrapper[5024]: I1007 13:44:50.167076 5024 scope.go:117] "RemoveContainer" containerID="935e6d74485ae662827691a419cf3433e3d8bb9d7bb1a80ac02b8ba2908ed4d5" Oct 07 13:44:50 crc kubenswrapper[5024]: I1007 13:44:50.167422 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25bcf9ac-af98-4ac1-937c-b1f3f169b872-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25bcf9ac-af98-4ac1-937c-b1f3f169b872" (UID: "25bcf9ac-af98-4ac1-937c-b1f3f169b872"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:44:50 crc kubenswrapper[5024]: I1007 13:44:50.189475 5024 scope.go:117] "RemoveContainer" containerID="4643cef35281ed05a464c6fba17335ba3e158f197fec7a3d7d338af51fdfd44f" Oct 07 13:44:50 crc kubenswrapper[5024]: I1007 13:44:50.260047 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25bcf9ac-af98-4ac1-937c-b1f3f169b872-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 13:44:50 crc kubenswrapper[5024]: I1007 13:44:50.947679 5024 scope.go:117] "RemoveContainer" containerID="e958aab214c99810023a30ec820a6a651da016a484ec11efa8b825c3e0c859d4" Oct 07 13:44:50 crc kubenswrapper[5024]: E1007 13:44:50.948652 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e958aab214c99810023a30ec820a6a651da016a484ec11efa8b825c3e0c859d4\": container with ID starting with e958aab214c99810023a30ec820a6a651da016a484ec11efa8b825c3e0c859d4 not found: ID does not exist" containerID="e958aab214c99810023a30ec820a6a651da016a484ec11efa8b825c3e0c859d4" Oct 07 13:44:50 crc kubenswrapper[5024]: I1007 13:44:50.948722 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e958aab214c99810023a30ec820a6a651da016a484ec11efa8b825c3e0c859d4"} err="failed to get container status \"e958aab214c99810023a30ec820a6a651da016a484ec11efa8b825c3e0c859d4\": rpc error: code = NotFound desc = could not find container \"e958aab214c99810023a30ec820a6a651da016a484ec11efa8b825c3e0c859d4\": container with ID starting with e958aab214c99810023a30ec820a6a651da016a484ec11efa8b825c3e0c859d4 not found: ID does not exist" Oct 07 13:44:50 crc kubenswrapper[5024]: I1007 13:44:50.948757 5024 scope.go:117] "RemoveContainer" containerID="935e6d74485ae662827691a419cf3433e3d8bb9d7bb1a80ac02b8ba2908ed4d5" Oct 07 13:44:50 crc kubenswrapper[5024]: E1007 13:44:50.950259 5024 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"935e6d74485ae662827691a419cf3433e3d8bb9d7bb1a80ac02b8ba2908ed4d5\": container with ID starting with 935e6d74485ae662827691a419cf3433e3d8bb9d7bb1a80ac02b8ba2908ed4d5 not found: ID does not exist" containerID="935e6d74485ae662827691a419cf3433e3d8bb9d7bb1a80ac02b8ba2908ed4d5" Oct 07 13:44:50 crc kubenswrapper[5024]: I1007 13:44:50.950297 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"935e6d74485ae662827691a419cf3433e3d8bb9d7bb1a80ac02b8ba2908ed4d5"} err="failed to get container status \"935e6d74485ae662827691a419cf3433e3d8bb9d7bb1a80ac02b8ba2908ed4d5\": rpc error: code = NotFound desc = could not find container \"935e6d74485ae662827691a419cf3433e3d8bb9d7bb1a80ac02b8ba2908ed4d5\": container with ID starting with 935e6d74485ae662827691a419cf3433e3d8bb9d7bb1a80ac02b8ba2908ed4d5 not found: ID does not exist" Oct 07 13:44:50 crc kubenswrapper[5024]: I1007 13:44:50.950324 5024 scope.go:117] "RemoveContainer" containerID="4643cef35281ed05a464c6fba17335ba3e158f197fec7a3d7d338af51fdfd44f" Oct 07 13:44:50 crc kubenswrapper[5024]: E1007 13:44:50.950639 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4643cef35281ed05a464c6fba17335ba3e158f197fec7a3d7d338af51fdfd44f\": container with ID starting with 4643cef35281ed05a464c6fba17335ba3e158f197fec7a3d7d338af51fdfd44f not found: ID does not exist" containerID="4643cef35281ed05a464c6fba17335ba3e158f197fec7a3d7d338af51fdfd44f" Oct 07 13:44:50 crc kubenswrapper[5024]: I1007 13:44:50.950674 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4643cef35281ed05a464c6fba17335ba3e158f197fec7a3d7d338af51fdfd44f"} err="failed to get container status \"4643cef35281ed05a464c6fba17335ba3e158f197fec7a3d7d338af51fdfd44f\": rpc error: code = NotFound desc = could 
not find container \"4643cef35281ed05a464c6fba17335ba3e158f197fec7a3d7d338af51fdfd44f\": container with ID starting with 4643cef35281ed05a464c6fba17335ba3e158f197fec7a3d7d338af51fdfd44f not found: ID does not exist" Oct 07 13:44:55 crc kubenswrapper[5024]: I1007 13:44:55.753180 5024 scope.go:117] "RemoveContainer" containerID="995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73" Oct 07 13:44:55 crc kubenswrapper[5024]: E1007 13:44:55.754328 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:45:00 crc kubenswrapper[5024]: I1007 13:45:00.152293 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29330745-pccgp"] Oct 07 13:45:00 crc kubenswrapper[5024]: E1007 13:45:00.153298 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25bcf9ac-af98-4ac1-937c-b1f3f169b872" containerName="extract-utilities" Oct 07 13:45:00 crc kubenswrapper[5024]: I1007 13:45:00.153314 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="25bcf9ac-af98-4ac1-937c-b1f3f169b872" containerName="extract-utilities" Oct 07 13:45:00 crc kubenswrapper[5024]: E1007 13:45:00.153339 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25bcf9ac-af98-4ac1-937c-b1f3f169b872" containerName="extract-content" Oct 07 13:45:00 crc kubenswrapper[5024]: I1007 13:45:00.153345 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="25bcf9ac-af98-4ac1-937c-b1f3f169b872" containerName="extract-content" Oct 07 13:45:00 crc kubenswrapper[5024]: E1007 13:45:00.153387 5024 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="25bcf9ac-af98-4ac1-937c-b1f3f169b872" containerName="registry-server" Oct 07 13:45:00 crc kubenswrapper[5024]: I1007 13:45:00.153395 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="25bcf9ac-af98-4ac1-937c-b1f3f169b872" containerName="registry-server" Oct 07 13:45:00 crc kubenswrapper[5024]: I1007 13:45:00.153593 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="25bcf9ac-af98-4ac1-937c-b1f3f169b872" containerName="registry-server" Oct 07 13:45:00 crc kubenswrapper[5024]: I1007 13:45:00.154358 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29330745-pccgp" Oct 07 13:45:00 crc kubenswrapper[5024]: I1007 13:45:00.160064 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 07 13:45:00 crc kubenswrapper[5024]: I1007 13:45:00.160345 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 07 13:45:00 crc kubenswrapper[5024]: I1007 13:45:00.180611 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29330745-pccgp"] Oct 07 13:45:00 crc kubenswrapper[5024]: I1007 13:45:00.326262 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62da2606-18e3-4d79-9c00-04642135b7cd-config-volume\") pod \"collect-profiles-29330745-pccgp\" (UID: \"62da2606-18e3-4d79-9c00-04642135b7cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330745-pccgp" Oct 07 13:45:00 crc kubenswrapper[5024]: I1007 13:45:00.326565 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/62da2606-18e3-4d79-9c00-04642135b7cd-secret-volume\") pod \"collect-profiles-29330745-pccgp\" (UID: \"62da2606-18e3-4d79-9c00-04642135b7cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330745-pccgp" Oct 07 13:45:00 crc kubenswrapper[5024]: I1007 13:45:00.326585 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ktf8\" (UniqueName: \"kubernetes.io/projected/62da2606-18e3-4d79-9c00-04642135b7cd-kube-api-access-2ktf8\") pod \"collect-profiles-29330745-pccgp\" (UID: \"62da2606-18e3-4d79-9c00-04642135b7cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330745-pccgp" Oct 07 13:45:00 crc kubenswrapper[5024]: I1007 13:45:00.429006 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62da2606-18e3-4d79-9c00-04642135b7cd-config-volume\") pod \"collect-profiles-29330745-pccgp\" (UID: \"62da2606-18e3-4d79-9c00-04642135b7cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330745-pccgp" Oct 07 13:45:00 crc kubenswrapper[5024]: I1007 13:45:00.429085 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62da2606-18e3-4d79-9c00-04642135b7cd-secret-volume\") pod \"collect-profiles-29330745-pccgp\" (UID: \"62da2606-18e3-4d79-9c00-04642135b7cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330745-pccgp" Oct 07 13:45:00 crc kubenswrapper[5024]: I1007 13:45:00.429114 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ktf8\" (UniqueName: \"kubernetes.io/projected/62da2606-18e3-4d79-9c00-04642135b7cd-kube-api-access-2ktf8\") pod \"collect-profiles-29330745-pccgp\" (UID: \"62da2606-18e3-4d79-9c00-04642135b7cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330745-pccgp" Oct 07 13:45:00 crc 
kubenswrapper[5024]: I1007 13:45:00.430067 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62da2606-18e3-4d79-9c00-04642135b7cd-config-volume\") pod \"collect-profiles-29330745-pccgp\" (UID: \"62da2606-18e3-4d79-9c00-04642135b7cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330745-pccgp" Oct 07 13:45:00 crc kubenswrapper[5024]: I1007 13:45:00.441990 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62da2606-18e3-4d79-9c00-04642135b7cd-secret-volume\") pod \"collect-profiles-29330745-pccgp\" (UID: \"62da2606-18e3-4d79-9c00-04642135b7cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330745-pccgp" Oct 07 13:45:00 crc kubenswrapper[5024]: I1007 13:45:00.456605 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ktf8\" (UniqueName: \"kubernetes.io/projected/62da2606-18e3-4d79-9c00-04642135b7cd-kube-api-access-2ktf8\") pod \"collect-profiles-29330745-pccgp\" (UID: \"62da2606-18e3-4d79-9c00-04642135b7cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330745-pccgp" Oct 07 13:45:00 crc kubenswrapper[5024]: I1007 13:45:00.481287 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29330745-pccgp" Oct 07 13:45:01 crc kubenswrapper[5024]: I1007 13:45:01.078016 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29330745-pccgp"] Oct 07 13:45:01 crc kubenswrapper[5024]: I1007 13:45:01.271007 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29330745-pccgp" event={"ID":"62da2606-18e3-4d79-9c00-04642135b7cd","Type":"ContainerStarted","Data":"2ab7953f1628f3d63cd51c61159bd36ecc2be67bf10379be2a0d1aae13fa1260"} Oct 07 13:45:02 crc kubenswrapper[5024]: I1007 13:45:02.319854 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29330745-pccgp" event={"ID":"62da2606-18e3-4d79-9c00-04642135b7cd","Type":"ContainerStarted","Data":"aa2873705cdd820d480c346c71af98d309b957897f74b37c28916f25abe867a4"} Oct 07 13:45:02 crc kubenswrapper[5024]: I1007 13:45:02.338289 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29330745-pccgp" podStartSLOduration=2.338267778 podStartE2EDuration="2.338267778s" podCreationTimestamp="2025-10-07 13:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-07 13:45:02.337487995 +0000 UTC m=+4640.413274833" watchObservedRunningTime="2025-10-07 13:45:02.338267778 +0000 UTC m=+4640.414054616" Oct 07 13:45:03 crc kubenswrapper[5024]: I1007 13:45:03.331787 5024 generic.go:334] "Generic (PLEG): container finished" podID="62da2606-18e3-4d79-9c00-04642135b7cd" containerID="aa2873705cdd820d480c346c71af98d309b957897f74b37c28916f25abe867a4" exitCode=0 Oct 07 13:45:03 crc kubenswrapper[5024]: I1007 13:45:03.331844 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29330745-pccgp" event={"ID":"62da2606-18e3-4d79-9c00-04642135b7cd","Type":"ContainerDied","Data":"aa2873705cdd820d480c346c71af98d309b957897f74b37c28916f25abe867a4"} Oct 07 13:45:04 crc kubenswrapper[5024]: I1007 13:45:04.859504 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29330745-pccgp" Oct 07 13:45:05 crc kubenswrapper[5024]: I1007 13:45:05.037350 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ktf8\" (UniqueName: \"kubernetes.io/projected/62da2606-18e3-4d79-9c00-04642135b7cd-kube-api-access-2ktf8\") pod \"62da2606-18e3-4d79-9c00-04642135b7cd\" (UID: \"62da2606-18e3-4d79-9c00-04642135b7cd\") " Oct 07 13:45:05 crc kubenswrapper[5024]: I1007 13:45:05.037768 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62da2606-18e3-4d79-9c00-04642135b7cd-config-volume\") pod \"62da2606-18e3-4d79-9c00-04642135b7cd\" (UID: \"62da2606-18e3-4d79-9c00-04642135b7cd\") " Oct 07 13:45:05 crc kubenswrapper[5024]: I1007 13:45:05.037903 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62da2606-18e3-4d79-9c00-04642135b7cd-secret-volume\") pod \"62da2606-18e3-4d79-9c00-04642135b7cd\" (UID: \"62da2606-18e3-4d79-9c00-04642135b7cd\") " Oct 07 13:45:05 crc kubenswrapper[5024]: I1007 13:45:05.038928 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62da2606-18e3-4d79-9c00-04642135b7cd-config-volume" (OuterVolumeSpecName: "config-volume") pod "62da2606-18e3-4d79-9c00-04642135b7cd" (UID: "62da2606-18e3-4d79-9c00-04642135b7cd"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 13:45:05 crc kubenswrapper[5024]: I1007 13:45:05.054486 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62da2606-18e3-4d79-9c00-04642135b7cd-kube-api-access-2ktf8" (OuterVolumeSpecName: "kube-api-access-2ktf8") pod "62da2606-18e3-4d79-9c00-04642135b7cd" (UID: "62da2606-18e3-4d79-9c00-04642135b7cd"). InnerVolumeSpecName "kube-api-access-2ktf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:45:05 crc kubenswrapper[5024]: I1007 13:45:05.057268 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62da2606-18e3-4d79-9c00-04642135b7cd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "62da2606-18e3-4d79-9c00-04642135b7cd" (UID: "62da2606-18e3-4d79-9c00-04642135b7cd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 13:45:05 crc kubenswrapper[5024]: I1007 13:45:05.141268 5024 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62da2606-18e3-4d79-9c00-04642135b7cd-config-volume\") on node \"crc\" DevicePath \"\"" Oct 07 13:45:05 crc kubenswrapper[5024]: I1007 13:45:05.141315 5024 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62da2606-18e3-4d79-9c00-04642135b7cd-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 07 13:45:05 crc kubenswrapper[5024]: I1007 13:45:05.141328 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ktf8\" (UniqueName: \"kubernetes.io/projected/62da2606-18e3-4d79-9c00-04642135b7cd-kube-api-access-2ktf8\") on node \"crc\" DevicePath \"\"" Oct 07 13:45:05 crc kubenswrapper[5024]: I1007 13:45:05.352920 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29330745-pccgp" 
event={"ID":"62da2606-18e3-4d79-9c00-04642135b7cd","Type":"ContainerDied","Data":"2ab7953f1628f3d63cd51c61159bd36ecc2be67bf10379be2a0d1aae13fa1260"} Oct 07 13:45:05 crc kubenswrapper[5024]: I1007 13:45:05.352978 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ab7953f1628f3d63cd51c61159bd36ecc2be67bf10379be2a0d1aae13fa1260" Oct 07 13:45:05 crc kubenswrapper[5024]: I1007 13:45:05.353051 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29330745-pccgp" Oct 07 13:45:05 crc kubenswrapper[5024]: I1007 13:45:05.422276 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29330700-88sd5"] Oct 07 13:45:05 crc kubenswrapper[5024]: I1007 13:45:05.431383 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29330700-88sd5"] Oct 07 13:45:06 crc kubenswrapper[5024]: I1007 13:45:06.774444 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77ae6a17-23ee-4022-9fc7-1fca62434dcb" path="/var/lib/kubelet/pods/77ae6a17-23ee-4022-9fc7-1fca62434dcb/volumes" Oct 07 13:45:09 crc kubenswrapper[5024]: I1007 13:45:09.751133 5024 scope.go:117] "RemoveContainer" containerID="995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73" Oct 07 13:45:09 crc kubenswrapper[5024]: E1007 13:45:09.751526 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:45:20 crc kubenswrapper[5024]: I1007 13:45:20.752334 5024 scope.go:117] "RemoveContainer" 
containerID="995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73" Oct 07 13:45:20 crc kubenswrapper[5024]: E1007 13:45:20.753721 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:45:20 crc kubenswrapper[5024]: I1007 13:45:20.927801 5024 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod25bcf9ac-af98-4ac1-937c-b1f3f169b872"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod25bcf9ac-af98-4ac1-937c-b1f3f169b872] : Timed out while waiting for systemd to remove kubepods-burstable-pod25bcf9ac_af98_4ac1_937c_b1f3f169b872.slice" Oct 07 13:45:20 crc kubenswrapper[5024]: E1007 13:45:20.927889 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable pod25bcf9ac-af98-4ac1-937c-b1f3f169b872] : unable to destroy cgroup paths for cgroup [kubepods burstable pod25bcf9ac-af98-4ac1-937c-b1f3f169b872] : Timed out while waiting for systemd to remove kubepods-burstable-pod25bcf9ac_af98_4ac1_937c_b1f3f169b872.slice" pod="openshift-marketplace/redhat-operators-d5fxt" podUID="25bcf9ac-af98-4ac1-937c-b1f3f169b872" Oct 07 13:45:21 crc kubenswrapper[5024]: I1007 13:45:21.522276 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d5fxt" Oct 07 13:45:21 crc kubenswrapper[5024]: I1007 13:45:21.565622 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d5fxt"] Oct 07 13:45:21 crc kubenswrapper[5024]: I1007 13:45:21.576806 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d5fxt"] Oct 07 13:45:22 crc kubenswrapper[5024]: I1007 13:45:22.770741 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25bcf9ac-af98-4ac1-937c-b1f3f169b872" path="/var/lib/kubelet/pods/25bcf9ac-af98-4ac1-937c-b1f3f169b872/volumes" Oct 07 13:45:23 crc kubenswrapper[5024]: I1007 13:45:23.498167 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5zz8z"] Oct 07 13:45:23 crc kubenswrapper[5024]: E1007 13:45:23.499221 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62da2606-18e3-4d79-9c00-04642135b7cd" containerName="collect-profiles" Oct 07 13:45:23 crc kubenswrapper[5024]: I1007 13:45:23.499251 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="62da2606-18e3-4d79-9c00-04642135b7cd" containerName="collect-profiles" Oct 07 13:45:23 crc kubenswrapper[5024]: I1007 13:45:23.499680 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="62da2606-18e3-4d79-9c00-04642135b7cd" containerName="collect-profiles" Oct 07 13:45:23 crc kubenswrapper[5024]: I1007 13:45:23.502494 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5zz8z" Oct 07 13:45:23 crc kubenswrapper[5024]: I1007 13:45:23.532221 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5zz8z"] Oct 07 13:45:23 crc kubenswrapper[5024]: I1007 13:45:23.646468 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vgdl\" (UniqueName: \"kubernetes.io/projected/3b58c2f8-5be3-4aba-85f7-3129408b869f-kube-api-access-7vgdl\") pod \"certified-operators-5zz8z\" (UID: \"3b58c2f8-5be3-4aba-85f7-3129408b869f\") " pod="openshift-marketplace/certified-operators-5zz8z" Oct 07 13:45:23 crc kubenswrapper[5024]: I1007 13:45:23.646946 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b58c2f8-5be3-4aba-85f7-3129408b869f-utilities\") pod \"certified-operators-5zz8z\" (UID: \"3b58c2f8-5be3-4aba-85f7-3129408b869f\") " pod="openshift-marketplace/certified-operators-5zz8z" Oct 07 13:45:23 crc kubenswrapper[5024]: I1007 13:45:23.647038 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b58c2f8-5be3-4aba-85f7-3129408b869f-catalog-content\") pod \"certified-operators-5zz8z\" (UID: \"3b58c2f8-5be3-4aba-85f7-3129408b869f\") " pod="openshift-marketplace/certified-operators-5zz8z" Oct 07 13:45:23 crc kubenswrapper[5024]: I1007 13:45:23.749894 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b58c2f8-5be3-4aba-85f7-3129408b869f-utilities\") pod \"certified-operators-5zz8z\" (UID: \"3b58c2f8-5be3-4aba-85f7-3129408b869f\") " pod="openshift-marketplace/certified-operators-5zz8z" Oct 07 13:45:23 crc kubenswrapper[5024]: I1007 13:45:23.749976 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b58c2f8-5be3-4aba-85f7-3129408b869f-catalog-content\") pod \"certified-operators-5zz8z\" (UID: \"3b58c2f8-5be3-4aba-85f7-3129408b869f\") " pod="openshift-marketplace/certified-operators-5zz8z" Oct 07 13:45:23 crc kubenswrapper[5024]: I1007 13:45:23.750088 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vgdl\" (UniqueName: \"kubernetes.io/projected/3b58c2f8-5be3-4aba-85f7-3129408b869f-kube-api-access-7vgdl\") pod \"certified-operators-5zz8z\" (UID: \"3b58c2f8-5be3-4aba-85f7-3129408b869f\") " pod="openshift-marketplace/certified-operators-5zz8z" Oct 07 13:45:23 crc kubenswrapper[5024]: I1007 13:45:23.750597 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b58c2f8-5be3-4aba-85f7-3129408b869f-utilities\") pod \"certified-operators-5zz8z\" (UID: \"3b58c2f8-5be3-4aba-85f7-3129408b869f\") " pod="openshift-marketplace/certified-operators-5zz8z" Oct 07 13:45:23 crc kubenswrapper[5024]: I1007 13:45:23.750597 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b58c2f8-5be3-4aba-85f7-3129408b869f-catalog-content\") pod \"certified-operators-5zz8z\" (UID: \"3b58c2f8-5be3-4aba-85f7-3129408b869f\") " pod="openshift-marketplace/certified-operators-5zz8z" Oct 07 13:45:23 crc kubenswrapper[5024]: I1007 13:45:23.786283 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vgdl\" (UniqueName: \"kubernetes.io/projected/3b58c2f8-5be3-4aba-85f7-3129408b869f-kube-api-access-7vgdl\") pod \"certified-operators-5zz8z\" (UID: \"3b58c2f8-5be3-4aba-85f7-3129408b869f\") " pod="openshift-marketplace/certified-operators-5zz8z" Oct 07 13:45:23 crc kubenswrapper[5024]: I1007 13:45:23.851588 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5zz8z" Oct 07 13:45:24 crc kubenswrapper[5024]: I1007 13:45:24.447343 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5zz8z"] Oct 07 13:45:24 crc kubenswrapper[5024]: I1007 13:45:24.569827 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zz8z" event={"ID":"3b58c2f8-5be3-4aba-85f7-3129408b869f","Type":"ContainerStarted","Data":"bccdd6161e25cde825978f57fcf37eeca8b0b95f0101bbea5b282debbd0cc819"} Oct 07 13:45:25 crc kubenswrapper[5024]: I1007 13:45:25.586239 5024 generic.go:334] "Generic (PLEG): container finished" podID="3b58c2f8-5be3-4aba-85f7-3129408b869f" containerID="7a1be464fd9e4a73e94bd0b0836824d74fe1c0e8dd731315befea594b86e4714" exitCode=0 Oct 07 13:45:25 crc kubenswrapper[5024]: I1007 13:45:25.586722 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zz8z" event={"ID":"3b58c2f8-5be3-4aba-85f7-3129408b869f","Type":"ContainerDied","Data":"7a1be464fd9e4a73e94bd0b0836824d74fe1c0e8dd731315befea594b86e4714"} Oct 07 13:45:27 crc kubenswrapper[5024]: I1007 13:45:27.610282 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zz8z" event={"ID":"3b58c2f8-5be3-4aba-85f7-3129408b869f","Type":"ContainerStarted","Data":"8faae2749c83037e3c7973298da898f308cc1bbd216301f7c22137298a551c00"} Oct 07 13:45:28 crc kubenswrapper[5024]: I1007 13:45:28.625346 5024 generic.go:334] "Generic (PLEG): container finished" podID="3b58c2f8-5be3-4aba-85f7-3129408b869f" containerID="8faae2749c83037e3c7973298da898f308cc1bbd216301f7c22137298a551c00" exitCode=0 Oct 07 13:45:28 crc kubenswrapper[5024]: I1007 13:45:28.625474 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zz8z" 
event={"ID":"3b58c2f8-5be3-4aba-85f7-3129408b869f","Type":"ContainerDied","Data":"8faae2749c83037e3c7973298da898f308cc1bbd216301f7c22137298a551c00"} Oct 07 13:45:30 crc kubenswrapper[5024]: I1007 13:45:30.667274 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zz8z" event={"ID":"3b58c2f8-5be3-4aba-85f7-3129408b869f","Type":"ContainerStarted","Data":"d620be8582f0857d7f96fed3f6ba65ffdf1c6230908e6bdbed8722acc0ca1087"} Oct 07 13:45:30 crc kubenswrapper[5024]: I1007 13:45:30.704916 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5zz8z" podStartSLOduration=3.699575424 podStartE2EDuration="7.704885393s" podCreationTimestamp="2025-10-07 13:45:23 +0000 UTC" firstStartedPulling="2025-10-07 13:45:25.589357575 +0000 UTC m=+4663.665144443" lastFinishedPulling="2025-10-07 13:45:29.594667544 +0000 UTC m=+4667.670454412" observedRunningTime="2025-10-07 13:45:30.699456576 +0000 UTC m=+4668.775243444" watchObservedRunningTime="2025-10-07 13:45:30.704885393 +0000 UTC m=+4668.780672241" Oct 07 13:45:32 crc kubenswrapper[5024]: I1007 13:45:32.757731 5024 scope.go:117] "RemoveContainer" containerID="995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73" Oct 07 13:45:32 crc kubenswrapper[5024]: E1007 13:45:32.759025 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:45:33 crc kubenswrapper[5024]: I1007 13:45:33.852754 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5zz8z" Oct 07 13:45:33 crc 
kubenswrapper[5024]: I1007 13:45:33.852830 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5zz8z" Oct 07 13:45:34 crc kubenswrapper[5024]: I1007 13:45:34.151903 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5zz8z" Oct 07 13:45:34 crc kubenswrapper[5024]: I1007 13:45:34.775030 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5zz8z" Oct 07 13:45:34 crc kubenswrapper[5024]: I1007 13:45:34.838780 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5zz8z"] Oct 07 13:45:36 crc kubenswrapper[5024]: I1007 13:45:36.725929 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5zz8z" podUID="3b58c2f8-5be3-4aba-85f7-3129408b869f" containerName="registry-server" containerID="cri-o://d620be8582f0857d7f96fed3f6ba65ffdf1c6230908e6bdbed8722acc0ca1087" gracePeriod=2 Oct 07 13:45:37 crc kubenswrapper[5024]: I1007 13:45:37.329352 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5zz8z" Oct 07 13:45:37 crc kubenswrapper[5024]: I1007 13:45:37.476598 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b58c2f8-5be3-4aba-85f7-3129408b869f-utilities\") pod \"3b58c2f8-5be3-4aba-85f7-3129408b869f\" (UID: \"3b58c2f8-5be3-4aba-85f7-3129408b869f\") " Oct 07 13:45:37 crc kubenswrapper[5024]: I1007 13:45:37.476948 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b58c2f8-5be3-4aba-85f7-3129408b869f-catalog-content\") pod \"3b58c2f8-5be3-4aba-85f7-3129408b869f\" (UID: \"3b58c2f8-5be3-4aba-85f7-3129408b869f\") " Oct 07 13:45:37 crc kubenswrapper[5024]: I1007 13:45:37.476990 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vgdl\" (UniqueName: \"kubernetes.io/projected/3b58c2f8-5be3-4aba-85f7-3129408b869f-kube-api-access-7vgdl\") pod \"3b58c2f8-5be3-4aba-85f7-3129408b869f\" (UID: \"3b58c2f8-5be3-4aba-85f7-3129408b869f\") " Oct 07 13:45:37 crc kubenswrapper[5024]: I1007 13:45:37.478101 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b58c2f8-5be3-4aba-85f7-3129408b869f-utilities" (OuterVolumeSpecName: "utilities") pod "3b58c2f8-5be3-4aba-85f7-3129408b869f" (UID: "3b58c2f8-5be3-4aba-85f7-3129408b869f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:45:37 crc kubenswrapper[5024]: I1007 13:45:37.484223 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b58c2f8-5be3-4aba-85f7-3129408b869f-kube-api-access-7vgdl" (OuterVolumeSpecName: "kube-api-access-7vgdl") pod "3b58c2f8-5be3-4aba-85f7-3129408b869f" (UID: "3b58c2f8-5be3-4aba-85f7-3129408b869f"). InnerVolumeSpecName "kube-api-access-7vgdl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:45:37 crc kubenswrapper[5024]: I1007 13:45:37.546526 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b58c2f8-5be3-4aba-85f7-3129408b869f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b58c2f8-5be3-4aba-85f7-3129408b869f" (UID: "3b58c2f8-5be3-4aba-85f7-3129408b869f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:45:37 crc kubenswrapper[5024]: I1007 13:45:37.579777 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b58c2f8-5be3-4aba-85f7-3129408b869f-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 13:45:37 crc kubenswrapper[5024]: I1007 13:45:37.579827 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b58c2f8-5be3-4aba-85f7-3129408b869f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 13:45:37 crc kubenswrapper[5024]: I1007 13:45:37.579844 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vgdl\" (UniqueName: \"kubernetes.io/projected/3b58c2f8-5be3-4aba-85f7-3129408b869f-kube-api-access-7vgdl\") on node \"crc\" DevicePath \"\"" Oct 07 13:45:37 crc kubenswrapper[5024]: I1007 13:45:37.739975 5024 generic.go:334] "Generic (PLEG): container finished" podID="3b58c2f8-5be3-4aba-85f7-3129408b869f" containerID="d620be8582f0857d7f96fed3f6ba65ffdf1c6230908e6bdbed8722acc0ca1087" exitCode=0 Oct 07 13:45:37 crc kubenswrapper[5024]: I1007 13:45:37.740029 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zz8z" event={"ID":"3b58c2f8-5be3-4aba-85f7-3129408b869f","Type":"ContainerDied","Data":"d620be8582f0857d7f96fed3f6ba65ffdf1c6230908e6bdbed8722acc0ca1087"} Oct 07 13:45:37 crc kubenswrapper[5024]: I1007 13:45:37.740048 5024 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5zz8z" Oct 07 13:45:37 crc kubenswrapper[5024]: I1007 13:45:37.740073 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zz8z" event={"ID":"3b58c2f8-5be3-4aba-85f7-3129408b869f","Type":"ContainerDied","Data":"bccdd6161e25cde825978f57fcf37eeca8b0b95f0101bbea5b282debbd0cc819"} Oct 07 13:45:37 crc kubenswrapper[5024]: I1007 13:45:37.740112 5024 scope.go:117] "RemoveContainer" containerID="d620be8582f0857d7f96fed3f6ba65ffdf1c6230908e6bdbed8722acc0ca1087" Oct 07 13:45:37 crc kubenswrapper[5024]: I1007 13:45:37.777465 5024 scope.go:117] "RemoveContainer" containerID="8faae2749c83037e3c7973298da898f308cc1bbd216301f7c22137298a551c00" Oct 07 13:45:37 crc kubenswrapper[5024]: I1007 13:45:37.784538 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5zz8z"] Oct 07 13:45:37 crc kubenswrapper[5024]: I1007 13:45:37.796107 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5zz8z"] Oct 07 13:45:37 crc kubenswrapper[5024]: I1007 13:45:37.838739 5024 scope.go:117] "RemoveContainer" containerID="7a1be464fd9e4a73e94bd0b0836824d74fe1c0e8dd731315befea594b86e4714" Oct 07 13:45:37 crc kubenswrapper[5024]: I1007 13:45:37.868952 5024 scope.go:117] "RemoveContainer" containerID="d620be8582f0857d7f96fed3f6ba65ffdf1c6230908e6bdbed8722acc0ca1087" Oct 07 13:45:37 crc kubenswrapper[5024]: E1007 13:45:37.869398 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d620be8582f0857d7f96fed3f6ba65ffdf1c6230908e6bdbed8722acc0ca1087\": container with ID starting with d620be8582f0857d7f96fed3f6ba65ffdf1c6230908e6bdbed8722acc0ca1087 not found: ID does not exist" containerID="d620be8582f0857d7f96fed3f6ba65ffdf1c6230908e6bdbed8722acc0ca1087" Oct 07 13:45:37 crc kubenswrapper[5024]: I1007 13:45:37.869433 
5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d620be8582f0857d7f96fed3f6ba65ffdf1c6230908e6bdbed8722acc0ca1087"} err="failed to get container status \"d620be8582f0857d7f96fed3f6ba65ffdf1c6230908e6bdbed8722acc0ca1087\": rpc error: code = NotFound desc = could not find container \"d620be8582f0857d7f96fed3f6ba65ffdf1c6230908e6bdbed8722acc0ca1087\": container with ID starting with d620be8582f0857d7f96fed3f6ba65ffdf1c6230908e6bdbed8722acc0ca1087 not found: ID does not exist" Oct 07 13:45:37 crc kubenswrapper[5024]: I1007 13:45:37.869457 5024 scope.go:117] "RemoveContainer" containerID="8faae2749c83037e3c7973298da898f308cc1bbd216301f7c22137298a551c00" Oct 07 13:45:37 crc kubenswrapper[5024]: E1007 13:45:37.869748 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8faae2749c83037e3c7973298da898f308cc1bbd216301f7c22137298a551c00\": container with ID starting with 8faae2749c83037e3c7973298da898f308cc1bbd216301f7c22137298a551c00 not found: ID does not exist" containerID="8faae2749c83037e3c7973298da898f308cc1bbd216301f7c22137298a551c00" Oct 07 13:45:37 crc kubenswrapper[5024]: I1007 13:45:37.869771 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8faae2749c83037e3c7973298da898f308cc1bbd216301f7c22137298a551c00"} err="failed to get container status \"8faae2749c83037e3c7973298da898f308cc1bbd216301f7c22137298a551c00\": rpc error: code = NotFound desc = could not find container \"8faae2749c83037e3c7973298da898f308cc1bbd216301f7c22137298a551c00\": container with ID starting with 8faae2749c83037e3c7973298da898f308cc1bbd216301f7c22137298a551c00 not found: ID does not exist" Oct 07 13:45:37 crc kubenswrapper[5024]: I1007 13:45:37.869786 5024 scope.go:117] "RemoveContainer" containerID="7a1be464fd9e4a73e94bd0b0836824d74fe1c0e8dd731315befea594b86e4714" Oct 07 13:45:37 crc kubenswrapper[5024]: E1007 
13:45:37.870027 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a1be464fd9e4a73e94bd0b0836824d74fe1c0e8dd731315befea594b86e4714\": container with ID starting with 7a1be464fd9e4a73e94bd0b0836824d74fe1c0e8dd731315befea594b86e4714 not found: ID does not exist" containerID="7a1be464fd9e4a73e94bd0b0836824d74fe1c0e8dd731315befea594b86e4714" Oct 07 13:45:37 crc kubenswrapper[5024]: I1007 13:45:37.870049 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a1be464fd9e4a73e94bd0b0836824d74fe1c0e8dd731315befea594b86e4714"} err="failed to get container status \"7a1be464fd9e4a73e94bd0b0836824d74fe1c0e8dd731315befea594b86e4714\": rpc error: code = NotFound desc = could not find container \"7a1be464fd9e4a73e94bd0b0836824d74fe1c0e8dd731315befea594b86e4714\": container with ID starting with 7a1be464fd9e4a73e94bd0b0836824d74fe1c0e8dd731315befea594b86e4714 not found: ID does not exist" Oct 07 13:45:38 crc kubenswrapper[5024]: I1007 13:45:38.782546 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b58c2f8-5be3-4aba-85f7-3129408b869f" path="/var/lib/kubelet/pods/3b58c2f8-5be3-4aba-85f7-3129408b869f/volumes" Oct 07 13:45:44 crc kubenswrapper[5024]: I1007 13:45:44.753476 5024 scope.go:117] "RemoveContainer" containerID="995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73" Oct 07 13:45:44 crc kubenswrapper[5024]: E1007 13:45:44.754409 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:45:50 crc kubenswrapper[5024]: I1007 13:45:50.065915 
5024 scope.go:117] "RemoveContainer" containerID="35b48f40aeb7183af53c9b160b3c07c55890aebcf8285e2f04198f1f2435c93a" Oct 07 13:45:56 crc kubenswrapper[5024]: I1007 13:45:56.751730 5024 scope.go:117] "RemoveContainer" containerID="995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73" Oct 07 13:45:56 crc kubenswrapper[5024]: E1007 13:45:56.752943 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:46:07 crc kubenswrapper[5024]: I1007 13:46:07.753447 5024 scope.go:117] "RemoveContainer" containerID="995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73" Oct 07 13:46:07 crc kubenswrapper[5024]: E1007 13:46:07.755113 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:46:19 crc kubenswrapper[5024]: I1007 13:46:19.751986 5024 scope.go:117] "RemoveContainer" containerID="995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73" Oct 07 13:46:19 crc kubenswrapper[5024]: E1007 13:46:19.752930 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:46:31 crc kubenswrapper[5024]: I1007 13:46:31.752447 5024 scope.go:117] "RemoveContainer" containerID="995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73" Oct 07 13:46:31 crc kubenswrapper[5024]: E1007 13:46:31.754799 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:46:42 crc kubenswrapper[5024]: I1007 13:46:42.771604 5024 scope.go:117] "RemoveContainer" containerID="995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73" Oct 07 13:46:42 crc kubenswrapper[5024]: E1007 13:46:42.772470 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:46:55 crc kubenswrapper[5024]: I1007 13:46:55.752213 5024 scope.go:117] "RemoveContainer" containerID="995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73" Oct 07 13:46:55 crc kubenswrapper[5024]: E1007 13:46:55.753182 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:47:08 crc kubenswrapper[5024]: I1007 13:47:08.753193 5024 scope.go:117] "RemoveContainer" containerID="995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73" Oct 07 13:47:08 crc kubenswrapper[5024]: E1007 13:47:08.754616 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:47:21 crc kubenswrapper[5024]: I1007 13:47:21.753634 5024 scope.go:117] "RemoveContainer" containerID="995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73" Oct 07 13:47:21 crc kubenswrapper[5024]: E1007 13:47:21.755033 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:47:33 crc kubenswrapper[5024]: I1007 13:47:33.753649 5024 scope.go:117] "RemoveContainer" containerID="995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73" Oct 07 13:47:33 crc kubenswrapper[5024]: E1007 13:47:33.755320 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:47:41 crc kubenswrapper[5024]: I1007 13:47:41.135768 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9jpbv"] Oct 07 13:47:41 crc kubenswrapper[5024]: E1007 13:47:41.137044 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b58c2f8-5be3-4aba-85f7-3129408b869f" containerName="extract-utilities" Oct 07 13:47:41 crc kubenswrapper[5024]: I1007 13:47:41.137065 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b58c2f8-5be3-4aba-85f7-3129408b869f" containerName="extract-utilities" Oct 07 13:47:41 crc kubenswrapper[5024]: E1007 13:47:41.137119 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b58c2f8-5be3-4aba-85f7-3129408b869f" containerName="extract-content" Oct 07 13:47:41 crc kubenswrapper[5024]: I1007 13:47:41.137127 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b58c2f8-5be3-4aba-85f7-3129408b869f" containerName="extract-content" Oct 07 13:47:41 crc kubenswrapper[5024]: E1007 13:47:41.137156 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b58c2f8-5be3-4aba-85f7-3129408b869f" containerName="registry-server" Oct 07 13:47:41 crc kubenswrapper[5024]: I1007 13:47:41.137168 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b58c2f8-5be3-4aba-85f7-3129408b869f" containerName="registry-server" Oct 07 13:47:41 crc kubenswrapper[5024]: I1007 13:47:41.137449 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b58c2f8-5be3-4aba-85f7-3129408b869f" containerName="registry-server" Oct 07 13:47:41 crc kubenswrapper[5024]: I1007 13:47:41.139401 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9jpbv" Oct 07 13:47:41 crc kubenswrapper[5024]: I1007 13:47:41.156005 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jpbv"] Oct 07 13:47:41 crc kubenswrapper[5024]: I1007 13:47:41.175915 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7rdw\" (UniqueName: \"kubernetes.io/projected/748f23ab-aba3-4554-93fa-2e2421fb34bf-kube-api-access-h7rdw\") pod \"redhat-marketplace-9jpbv\" (UID: \"748f23ab-aba3-4554-93fa-2e2421fb34bf\") " pod="openshift-marketplace/redhat-marketplace-9jpbv" Oct 07 13:47:41 crc kubenswrapper[5024]: I1007 13:47:41.175977 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748f23ab-aba3-4554-93fa-2e2421fb34bf-utilities\") pod \"redhat-marketplace-9jpbv\" (UID: \"748f23ab-aba3-4554-93fa-2e2421fb34bf\") " pod="openshift-marketplace/redhat-marketplace-9jpbv" Oct 07 13:47:41 crc kubenswrapper[5024]: I1007 13:47:41.176199 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748f23ab-aba3-4554-93fa-2e2421fb34bf-catalog-content\") pod \"redhat-marketplace-9jpbv\" (UID: \"748f23ab-aba3-4554-93fa-2e2421fb34bf\") " pod="openshift-marketplace/redhat-marketplace-9jpbv" Oct 07 13:47:41 crc kubenswrapper[5024]: I1007 13:47:41.278760 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7rdw\" (UniqueName: \"kubernetes.io/projected/748f23ab-aba3-4554-93fa-2e2421fb34bf-kube-api-access-h7rdw\") pod \"redhat-marketplace-9jpbv\" (UID: \"748f23ab-aba3-4554-93fa-2e2421fb34bf\") " pod="openshift-marketplace/redhat-marketplace-9jpbv" Oct 07 13:47:41 crc kubenswrapper[5024]: I1007 13:47:41.278852 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748f23ab-aba3-4554-93fa-2e2421fb34bf-utilities\") pod \"redhat-marketplace-9jpbv\" (UID: \"748f23ab-aba3-4554-93fa-2e2421fb34bf\") " pod="openshift-marketplace/redhat-marketplace-9jpbv" Oct 07 13:47:41 crc kubenswrapper[5024]: I1007 13:47:41.278927 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748f23ab-aba3-4554-93fa-2e2421fb34bf-catalog-content\") pod \"redhat-marketplace-9jpbv\" (UID: \"748f23ab-aba3-4554-93fa-2e2421fb34bf\") " pod="openshift-marketplace/redhat-marketplace-9jpbv" Oct 07 13:47:41 crc kubenswrapper[5024]: I1007 13:47:41.279508 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748f23ab-aba3-4554-93fa-2e2421fb34bf-utilities\") pod \"redhat-marketplace-9jpbv\" (UID: \"748f23ab-aba3-4554-93fa-2e2421fb34bf\") " pod="openshift-marketplace/redhat-marketplace-9jpbv" Oct 07 13:47:41 crc kubenswrapper[5024]: I1007 13:47:41.288863 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748f23ab-aba3-4554-93fa-2e2421fb34bf-catalog-content\") pod \"redhat-marketplace-9jpbv\" (UID: \"748f23ab-aba3-4554-93fa-2e2421fb34bf\") " pod="openshift-marketplace/redhat-marketplace-9jpbv" Oct 07 13:47:41 crc kubenswrapper[5024]: I1007 13:47:41.303392 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7rdw\" (UniqueName: \"kubernetes.io/projected/748f23ab-aba3-4554-93fa-2e2421fb34bf-kube-api-access-h7rdw\") pod \"redhat-marketplace-9jpbv\" (UID: \"748f23ab-aba3-4554-93fa-2e2421fb34bf\") " pod="openshift-marketplace/redhat-marketplace-9jpbv" Oct 07 13:47:41 crc kubenswrapper[5024]: I1007 13:47:41.466696 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9jpbv" Oct 07 13:47:41 crc kubenswrapper[5024]: I1007 13:47:41.960645 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jpbv"] Oct 07 13:47:41 crc kubenswrapper[5024]: W1007 13:47:41.973566 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod748f23ab_aba3_4554_93fa_2e2421fb34bf.slice/crio-aed6f859a3b86c81e6dbfee742a46e672501e681d7faae02fb8f0626ef78a110 WatchSource:0}: Error finding container aed6f859a3b86c81e6dbfee742a46e672501e681d7faae02fb8f0626ef78a110: Status 404 returned error can't find the container with id aed6f859a3b86c81e6dbfee742a46e672501e681d7faae02fb8f0626ef78a110 Oct 07 13:47:42 crc kubenswrapper[5024]: I1007 13:47:42.192085 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jpbv" event={"ID":"748f23ab-aba3-4554-93fa-2e2421fb34bf","Type":"ContainerStarted","Data":"0977a785b3dd4edc7fbfc91673e747ac13fb15870a4a0e54f09faf8b5a7c8687"} Oct 07 13:47:42 crc kubenswrapper[5024]: I1007 13:47:42.192187 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jpbv" event={"ID":"748f23ab-aba3-4554-93fa-2e2421fb34bf","Type":"ContainerStarted","Data":"aed6f859a3b86c81e6dbfee742a46e672501e681d7faae02fb8f0626ef78a110"} Oct 07 13:47:43 crc kubenswrapper[5024]: I1007 13:47:43.207917 5024 generic.go:334] "Generic (PLEG): container finished" podID="748f23ab-aba3-4554-93fa-2e2421fb34bf" containerID="0977a785b3dd4edc7fbfc91673e747ac13fb15870a4a0e54f09faf8b5a7c8687" exitCode=0 Oct 07 13:47:43 crc kubenswrapper[5024]: I1007 13:47:43.208053 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jpbv" 
event={"ID":"748f23ab-aba3-4554-93fa-2e2421fb34bf","Type":"ContainerDied","Data":"0977a785b3dd4edc7fbfc91673e747ac13fb15870a4a0e54f09faf8b5a7c8687"} Oct 07 13:47:45 crc kubenswrapper[5024]: I1007 13:47:45.230942 5024 generic.go:334] "Generic (PLEG): container finished" podID="748f23ab-aba3-4554-93fa-2e2421fb34bf" containerID="74cec39a2497767e9a9ec92036e8c53b82677082dcc6f999e728d01aba3a5867" exitCode=0 Oct 07 13:47:45 crc kubenswrapper[5024]: I1007 13:47:45.231032 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jpbv" event={"ID":"748f23ab-aba3-4554-93fa-2e2421fb34bf","Type":"ContainerDied","Data":"74cec39a2497767e9a9ec92036e8c53b82677082dcc6f999e728d01aba3a5867"} Oct 07 13:47:47 crc kubenswrapper[5024]: I1007 13:47:47.256999 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jpbv" event={"ID":"748f23ab-aba3-4554-93fa-2e2421fb34bf","Type":"ContainerStarted","Data":"dfc26c1eade500f656705d7b2edca7ac948cd0383f1e1bda5bf8e2fed0bf7969"} Oct 07 13:47:47 crc kubenswrapper[5024]: I1007 13:47:47.280981 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9jpbv" podStartSLOduration=3.040698672 podStartE2EDuration="6.280955257s" podCreationTimestamp="2025-10-07 13:47:41 +0000 UTC" firstStartedPulling="2025-10-07 13:47:43.210357333 +0000 UTC m=+4801.286144181" lastFinishedPulling="2025-10-07 13:47:46.450613918 +0000 UTC m=+4804.526400766" observedRunningTime="2025-10-07 13:47:47.276455147 +0000 UTC m=+4805.352241985" watchObservedRunningTime="2025-10-07 13:47:47.280955257 +0000 UTC m=+4805.356742105" Oct 07 13:47:48 crc kubenswrapper[5024]: I1007 13:47:48.752870 5024 scope.go:117] "RemoveContainer" containerID="995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73" Oct 07 13:47:49 crc kubenswrapper[5024]: I1007 13:47:49.283641 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerStarted","Data":"7827aa34827db81de10dcfe39b279331f1c031e07b9d44bfa50d92e0956aff09"} Oct 07 13:47:51 crc kubenswrapper[5024]: I1007 13:47:51.467441 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9jpbv" Oct 07 13:47:51 crc kubenswrapper[5024]: I1007 13:47:51.468174 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9jpbv" Oct 07 13:47:51 crc kubenswrapper[5024]: I1007 13:47:51.535367 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9jpbv" Oct 07 13:47:52 crc kubenswrapper[5024]: I1007 13:47:52.475184 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9jpbv" Oct 07 13:47:52 crc kubenswrapper[5024]: I1007 13:47:52.541800 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jpbv"] Oct 07 13:47:54 crc kubenswrapper[5024]: I1007 13:47:54.342076 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9jpbv" podUID="748f23ab-aba3-4554-93fa-2e2421fb34bf" containerName="registry-server" containerID="cri-o://dfc26c1eade500f656705d7b2edca7ac948cd0383f1e1bda5bf8e2fed0bf7969" gracePeriod=2 Oct 07 13:47:54 crc kubenswrapper[5024]: I1007 13:47:54.862938 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9jpbv" Oct 07 13:47:55 crc kubenswrapper[5024]: I1007 13:47:55.010248 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748f23ab-aba3-4554-93fa-2e2421fb34bf-catalog-content\") pod \"748f23ab-aba3-4554-93fa-2e2421fb34bf\" (UID: \"748f23ab-aba3-4554-93fa-2e2421fb34bf\") " Oct 07 13:47:55 crc kubenswrapper[5024]: I1007 13:47:55.010409 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7rdw\" (UniqueName: \"kubernetes.io/projected/748f23ab-aba3-4554-93fa-2e2421fb34bf-kube-api-access-h7rdw\") pod \"748f23ab-aba3-4554-93fa-2e2421fb34bf\" (UID: \"748f23ab-aba3-4554-93fa-2e2421fb34bf\") " Oct 07 13:47:55 crc kubenswrapper[5024]: I1007 13:47:55.010453 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748f23ab-aba3-4554-93fa-2e2421fb34bf-utilities\") pod \"748f23ab-aba3-4554-93fa-2e2421fb34bf\" (UID: \"748f23ab-aba3-4554-93fa-2e2421fb34bf\") " Oct 07 13:47:55 crc kubenswrapper[5024]: I1007 13:47:55.011905 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/748f23ab-aba3-4554-93fa-2e2421fb34bf-utilities" (OuterVolumeSpecName: "utilities") pod "748f23ab-aba3-4554-93fa-2e2421fb34bf" (UID: "748f23ab-aba3-4554-93fa-2e2421fb34bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:47:55 crc kubenswrapper[5024]: I1007 13:47:55.027215 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/748f23ab-aba3-4554-93fa-2e2421fb34bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "748f23ab-aba3-4554-93fa-2e2421fb34bf" (UID: "748f23ab-aba3-4554-93fa-2e2421fb34bf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:47:55 crc kubenswrapper[5024]: I1007 13:47:55.032610 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/748f23ab-aba3-4554-93fa-2e2421fb34bf-kube-api-access-h7rdw" (OuterVolumeSpecName: "kube-api-access-h7rdw") pod "748f23ab-aba3-4554-93fa-2e2421fb34bf" (UID: "748f23ab-aba3-4554-93fa-2e2421fb34bf"). InnerVolumeSpecName "kube-api-access-h7rdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:47:55 crc kubenswrapper[5024]: I1007 13:47:55.113858 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748f23ab-aba3-4554-93fa-2e2421fb34bf-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 13:47:55 crc kubenswrapper[5024]: I1007 13:47:55.113900 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7rdw\" (UniqueName: \"kubernetes.io/projected/748f23ab-aba3-4554-93fa-2e2421fb34bf-kube-api-access-h7rdw\") on node \"crc\" DevicePath \"\"" Oct 07 13:47:55 crc kubenswrapper[5024]: I1007 13:47:55.113914 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748f23ab-aba3-4554-93fa-2e2421fb34bf-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 13:47:55 crc kubenswrapper[5024]: I1007 13:47:55.355423 5024 generic.go:334] "Generic (PLEG): container finished" podID="748f23ab-aba3-4554-93fa-2e2421fb34bf" containerID="dfc26c1eade500f656705d7b2edca7ac948cd0383f1e1bda5bf8e2fed0bf7969" exitCode=0 Oct 07 13:47:55 crc kubenswrapper[5024]: I1007 13:47:55.355490 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9jpbv" Oct 07 13:47:55 crc kubenswrapper[5024]: I1007 13:47:55.355480 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jpbv" event={"ID":"748f23ab-aba3-4554-93fa-2e2421fb34bf","Type":"ContainerDied","Data":"dfc26c1eade500f656705d7b2edca7ac948cd0383f1e1bda5bf8e2fed0bf7969"} Oct 07 13:47:55 crc kubenswrapper[5024]: I1007 13:47:55.355913 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jpbv" event={"ID":"748f23ab-aba3-4554-93fa-2e2421fb34bf","Type":"ContainerDied","Data":"aed6f859a3b86c81e6dbfee742a46e672501e681d7faae02fb8f0626ef78a110"} Oct 07 13:47:55 crc kubenswrapper[5024]: I1007 13:47:55.355951 5024 scope.go:117] "RemoveContainer" containerID="dfc26c1eade500f656705d7b2edca7ac948cd0383f1e1bda5bf8e2fed0bf7969" Oct 07 13:47:55 crc kubenswrapper[5024]: I1007 13:47:55.388779 5024 scope.go:117] "RemoveContainer" containerID="74cec39a2497767e9a9ec92036e8c53b82677082dcc6f999e728d01aba3a5867" Oct 07 13:47:55 crc kubenswrapper[5024]: I1007 13:47:55.401163 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jpbv"] Oct 07 13:47:55 crc kubenswrapper[5024]: I1007 13:47:55.412950 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jpbv"] Oct 07 13:47:55 crc kubenswrapper[5024]: I1007 13:47:55.436277 5024 scope.go:117] "RemoveContainer" containerID="0977a785b3dd4edc7fbfc91673e747ac13fb15870a4a0e54f09faf8b5a7c8687" Oct 07 13:47:55 crc kubenswrapper[5024]: I1007 13:47:55.461041 5024 scope.go:117] "RemoveContainer" containerID="dfc26c1eade500f656705d7b2edca7ac948cd0383f1e1bda5bf8e2fed0bf7969" Oct 07 13:47:55 crc kubenswrapper[5024]: E1007 13:47:55.461954 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"dfc26c1eade500f656705d7b2edca7ac948cd0383f1e1bda5bf8e2fed0bf7969\": container with ID starting with dfc26c1eade500f656705d7b2edca7ac948cd0383f1e1bda5bf8e2fed0bf7969 not found: ID does not exist" containerID="dfc26c1eade500f656705d7b2edca7ac948cd0383f1e1bda5bf8e2fed0bf7969" Oct 07 13:47:55 crc kubenswrapper[5024]: I1007 13:47:55.462009 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfc26c1eade500f656705d7b2edca7ac948cd0383f1e1bda5bf8e2fed0bf7969"} err="failed to get container status \"dfc26c1eade500f656705d7b2edca7ac948cd0383f1e1bda5bf8e2fed0bf7969\": rpc error: code = NotFound desc = could not find container \"dfc26c1eade500f656705d7b2edca7ac948cd0383f1e1bda5bf8e2fed0bf7969\": container with ID starting with dfc26c1eade500f656705d7b2edca7ac948cd0383f1e1bda5bf8e2fed0bf7969 not found: ID does not exist" Oct 07 13:47:55 crc kubenswrapper[5024]: I1007 13:47:55.462047 5024 scope.go:117] "RemoveContainer" containerID="74cec39a2497767e9a9ec92036e8c53b82677082dcc6f999e728d01aba3a5867" Oct 07 13:47:55 crc kubenswrapper[5024]: E1007 13:47:55.462406 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74cec39a2497767e9a9ec92036e8c53b82677082dcc6f999e728d01aba3a5867\": container with ID starting with 74cec39a2497767e9a9ec92036e8c53b82677082dcc6f999e728d01aba3a5867 not found: ID does not exist" containerID="74cec39a2497767e9a9ec92036e8c53b82677082dcc6f999e728d01aba3a5867" Oct 07 13:47:55 crc kubenswrapper[5024]: I1007 13:47:55.462445 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74cec39a2497767e9a9ec92036e8c53b82677082dcc6f999e728d01aba3a5867"} err="failed to get container status \"74cec39a2497767e9a9ec92036e8c53b82677082dcc6f999e728d01aba3a5867\": rpc error: code = NotFound desc = could not find container \"74cec39a2497767e9a9ec92036e8c53b82677082dcc6f999e728d01aba3a5867\": container with ID 
starting with 74cec39a2497767e9a9ec92036e8c53b82677082dcc6f999e728d01aba3a5867 not found: ID does not exist" Oct 07 13:47:55 crc kubenswrapper[5024]: I1007 13:47:55.462471 5024 scope.go:117] "RemoveContainer" containerID="0977a785b3dd4edc7fbfc91673e747ac13fb15870a4a0e54f09faf8b5a7c8687" Oct 07 13:47:55 crc kubenswrapper[5024]: E1007 13:47:55.463048 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0977a785b3dd4edc7fbfc91673e747ac13fb15870a4a0e54f09faf8b5a7c8687\": container with ID starting with 0977a785b3dd4edc7fbfc91673e747ac13fb15870a4a0e54f09faf8b5a7c8687 not found: ID does not exist" containerID="0977a785b3dd4edc7fbfc91673e747ac13fb15870a4a0e54f09faf8b5a7c8687" Oct 07 13:47:55 crc kubenswrapper[5024]: I1007 13:47:55.463083 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0977a785b3dd4edc7fbfc91673e747ac13fb15870a4a0e54f09faf8b5a7c8687"} err="failed to get container status \"0977a785b3dd4edc7fbfc91673e747ac13fb15870a4a0e54f09faf8b5a7c8687\": rpc error: code = NotFound desc = could not find container \"0977a785b3dd4edc7fbfc91673e747ac13fb15870a4a0e54f09faf8b5a7c8687\": container with ID starting with 0977a785b3dd4edc7fbfc91673e747ac13fb15870a4a0e54f09faf8b5a7c8687 not found: ID does not exist" Oct 07 13:47:55 crc kubenswrapper[5024]: E1007 13:47:55.491792 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod748f23ab_aba3_4554_93fa_2e2421fb34bf.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod748f23ab_aba3_4554_93fa_2e2421fb34bf.slice/crio-aed6f859a3b86c81e6dbfee742a46e672501e681d7faae02fb8f0626ef78a110\": RecentStats: unable to find data in memory cache]" Oct 07 13:47:56 crc kubenswrapper[5024]: I1007 13:47:56.763169 5024 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="748f23ab-aba3-4554-93fa-2e2421fb34bf" path="/var/lib/kubelet/pods/748f23ab-aba3-4554-93fa-2e2421fb34bf/volumes" Oct 07 13:48:47 crc kubenswrapper[5024]: I1007 13:48:47.289096 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zbs4d"] Oct 07 13:48:47 crc kubenswrapper[5024]: E1007 13:48:47.290114 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748f23ab-aba3-4554-93fa-2e2421fb34bf" containerName="extract-utilities" Oct 07 13:48:47 crc kubenswrapper[5024]: I1007 13:48:47.290153 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="748f23ab-aba3-4554-93fa-2e2421fb34bf" containerName="extract-utilities" Oct 07 13:48:47 crc kubenswrapper[5024]: E1007 13:48:47.290176 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748f23ab-aba3-4554-93fa-2e2421fb34bf" containerName="extract-content" Oct 07 13:48:47 crc kubenswrapper[5024]: I1007 13:48:47.290183 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="748f23ab-aba3-4554-93fa-2e2421fb34bf" containerName="extract-content" Oct 07 13:48:47 crc kubenswrapper[5024]: E1007 13:48:47.290219 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748f23ab-aba3-4554-93fa-2e2421fb34bf" containerName="registry-server" Oct 07 13:48:47 crc kubenswrapper[5024]: I1007 13:48:47.290225 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="748f23ab-aba3-4554-93fa-2e2421fb34bf" containerName="registry-server" Oct 07 13:48:47 crc kubenswrapper[5024]: I1007 13:48:47.290397 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="748f23ab-aba3-4554-93fa-2e2421fb34bf" containerName="registry-server" Oct 07 13:48:47 crc kubenswrapper[5024]: I1007 13:48:47.291885 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zbs4d" Oct 07 13:48:47 crc kubenswrapper[5024]: I1007 13:48:47.303406 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zbs4d"] Oct 07 13:48:47 crc kubenswrapper[5024]: I1007 13:48:47.392285 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddwkz\" (UniqueName: \"kubernetes.io/projected/c1543335-4a9b-41f9-ae98-3671218b76e3-kube-api-access-ddwkz\") pod \"community-operators-zbs4d\" (UID: \"c1543335-4a9b-41f9-ae98-3671218b76e3\") " pod="openshift-marketplace/community-operators-zbs4d" Oct 07 13:48:47 crc kubenswrapper[5024]: I1007 13:48:47.392506 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1543335-4a9b-41f9-ae98-3671218b76e3-utilities\") pod \"community-operators-zbs4d\" (UID: \"c1543335-4a9b-41f9-ae98-3671218b76e3\") " pod="openshift-marketplace/community-operators-zbs4d" Oct 07 13:48:47 crc kubenswrapper[5024]: I1007 13:48:47.392571 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1543335-4a9b-41f9-ae98-3671218b76e3-catalog-content\") pod \"community-operators-zbs4d\" (UID: \"c1543335-4a9b-41f9-ae98-3671218b76e3\") " pod="openshift-marketplace/community-operators-zbs4d" Oct 07 13:48:47 crc kubenswrapper[5024]: I1007 13:48:47.495030 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1543335-4a9b-41f9-ae98-3671218b76e3-utilities\") pod \"community-operators-zbs4d\" (UID: \"c1543335-4a9b-41f9-ae98-3671218b76e3\") " pod="openshift-marketplace/community-operators-zbs4d" Oct 07 13:48:47 crc kubenswrapper[5024]: I1007 13:48:47.495214 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1543335-4a9b-41f9-ae98-3671218b76e3-catalog-content\") pod \"community-operators-zbs4d\" (UID: \"c1543335-4a9b-41f9-ae98-3671218b76e3\") " pod="openshift-marketplace/community-operators-zbs4d" Oct 07 13:48:47 crc kubenswrapper[5024]: I1007 13:48:47.495300 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddwkz\" (UniqueName: \"kubernetes.io/projected/c1543335-4a9b-41f9-ae98-3671218b76e3-kube-api-access-ddwkz\") pod \"community-operators-zbs4d\" (UID: \"c1543335-4a9b-41f9-ae98-3671218b76e3\") " pod="openshift-marketplace/community-operators-zbs4d" Oct 07 13:48:47 crc kubenswrapper[5024]: I1007 13:48:47.495617 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1543335-4a9b-41f9-ae98-3671218b76e3-utilities\") pod \"community-operators-zbs4d\" (UID: \"c1543335-4a9b-41f9-ae98-3671218b76e3\") " pod="openshift-marketplace/community-operators-zbs4d" Oct 07 13:48:47 crc kubenswrapper[5024]: I1007 13:48:47.495698 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1543335-4a9b-41f9-ae98-3671218b76e3-catalog-content\") pod \"community-operators-zbs4d\" (UID: \"c1543335-4a9b-41f9-ae98-3671218b76e3\") " pod="openshift-marketplace/community-operators-zbs4d" Oct 07 13:48:47 crc kubenswrapper[5024]: I1007 13:48:47.537354 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddwkz\" (UniqueName: \"kubernetes.io/projected/c1543335-4a9b-41f9-ae98-3671218b76e3-kube-api-access-ddwkz\") pod \"community-operators-zbs4d\" (UID: \"c1543335-4a9b-41f9-ae98-3671218b76e3\") " pod="openshift-marketplace/community-operators-zbs4d" Oct 07 13:48:47 crc kubenswrapper[5024]: I1007 13:48:47.628682 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zbs4d" Oct 07 13:48:48 crc kubenswrapper[5024]: I1007 13:48:48.223677 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zbs4d"] Oct 07 13:48:48 crc kubenswrapper[5024]: I1007 13:48:48.952531 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbs4d" event={"ID":"c1543335-4a9b-41f9-ae98-3671218b76e3","Type":"ContainerStarted","Data":"c13dd19a46afaea26e8ffbbabc2f6390698d967cc80f9ed5c61ef62c35afd314"} Oct 07 13:48:49 crc kubenswrapper[5024]: I1007 13:48:49.967070 5024 generic.go:334] "Generic (PLEG): container finished" podID="c1543335-4a9b-41f9-ae98-3671218b76e3" containerID="3b43aca02abe4e327a917276e89824cb153077b6cf1b61473556b2d2cf168320" exitCode=0 Oct 07 13:48:49 crc kubenswrapper[5024]: I1007 13:48:49.967165 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbs4d" event={"ID":"c1543335-4a9b-41f9-ae98-3671218b76e3","Type":"ContainerDied","Data":"3b43aca02abe4e327a917276e89824cb153077b6cf1b61473556b2d2cf168320"} Oct 07 13:48:51 crc kubenswrapper[5024]: I1007 13:48:51.994983 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbs4d" event={"ID":"c1543335-4a9b-41f9-ae98-3671218b76e3","Type":"ContainerStarted","Data":"125c7f9b26edd8d7431b9fdd38b5ee13e441e8d151fba96bb2292b6116cc3105"} Oct 07 13:48:53 crc kubenswrapper[5024]: I1007 13:48:53.009201 5024 generic.go:334] "Generic (PLEG): container finished" podID="c1543335-4a9b-41f9-ae98-3671218b76e3" containerID="125c7f9b26edd8d7431b9fdd38b5ee13e441e8d151fba96bb2292b6116cc3105" exitCode=0 Oct 07 13:48:53 crc kubenswrapper[5024]: I1007 13:48:53.009317 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbs4d" 
event={"ID":"c1543335-4a9b-41f9-ae98-3671218b76e3","Type":"ContainerDied","Data":"125c7f9b26edd8d7431b9fdd38b5ee13e441e8d151fba96bb2292b6116cc3105"} Oct 07 13:48:54 crc kubenswrapper[5024]: I1007 13:48:54.026802 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbs4d" event={"ID":"c1543335-4a9b-41f9-ae98-3671218b76e3","Type":"ContainerStarted","Data":"73e4a563c75064efdcb428684d5c192196b735d4577ad6586ce681f9d27facdf"} Oct 07 13:48:54 crc kubenswrapper[5024]: I1007 13:48:54.051531 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zbs4d" podStartSLOduration=3.342179463 podStartE2EDuration="7.051510729s" podCreationTimestamp="2025-10-07 13:48:47 +0000 UTC" firstStartedPulling="2025-10-07 13:48:49.969636299 +0000 UTC m=+4868.045423167" lastFinishedPulling="2025-10-07 13:48:53.678967565 +0000 UTC m=+4871.754754433" observedRunningTime="2025-10-07 13:48:54.050466799 +0000 UTC m=+4872.126253637" watchObservedRunningTime="2025-10-07 13:48:54.051510729 +0000 UTC m=+4872.127297567" Oct 07 13:48:57 crc kubenswrapper[5024]: I1007 13:48:57.629945 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zbs4d" Oct 07 13:48:57 crc kubenswrapper[5024]: I1007 13:48:57.630794 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zbs4d" Oct 07 13:48:57 crc kubenswrapper[5024]: I1007 13:48:57.689868 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zbs4d" Oct 07 13:48:58 crc kubenswrapper[5024]: I1007 13:48:58.138528 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zbs4d" Oct 07 13:48:58 crc kubenswrapper[5024]: I1007 13:48:58.476446 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-zbs4d"] Oct 07 13:49:00 crc kubenswrapper[5024]: I1007 13:49:00.092454 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zbs4d" podUID="c1543335-4a9b-41f9-ae98-3671218b76e3" containerName="registry-server" containerID="cri-o://73e4a563c75064efdcb428684d5c192196b735d4577ad6586ce681f9d27facdf" gracePeriod=2 Oct 07 13:49:01 crc kubenswrapper[5024]: I1007 13:49:01.106342 5024 generic.go:334] "Generic (PLEG): container finished" podID="c1543335-4a9b-41f9-ae98-3671218b76e3" containerID="73e4a563c75064efdcb428684d5c192196b735d4577ad6586ce681f9d27facdf" exitCode=0 Oct 07 13:49:01 crc kubenswrapper[5024]: I1007 13:49:01.106392 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbs4d" event={"ID":"c1543335-4a9b-41f9-ae98-3671218b76e3","Type":"ContainerDied","Data":"73e4a563c75064efdcb428684d5c192196b735d4577ad6586ce681f9d27facdf"} Oct 07 13:49:01 crc kubenswrapper[5024]: I1007 13:49:01.456516 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zbs4d" Oct 07 13:49:01 crc kubenswrapper[5024]: I1007 13:49:01.547932 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddwkz\" (UniqueName: \"kubernetes.io/projected/c1543335-4a9b-41f9-ae98-3671218b76e3-kube-api-access-ddwkz\") pod \"c1543335-4a9b-41f9-ae98-3671218b76e3\" (UID: \"c1543335-4a9b-41f9-ae98-3671218b76e3\") " Oct 07 13:49:01 crc kubenswrapper[5024]: I1007 13:49:01.548248 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1543335-4a9b-41f9-ae98-3671218b76e3-catalog-content\") pod \"c1543335-4a9b-41f9-ae98-3671218b76e3\" (UID: \"c1543335-4a9b-41f9-ae98-3671218b76e3\") " Oct 07 13:49:01 crc kubenswrapper[5024]: I1007 13:49:01.548303 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1543335-4a9b-41f9-ae98-3671218b76e3-utilities\") pod \"c1543335-4a9b-41f9-ae98-3671218b76e3\" (UID: \"c1543335-4a9b-41f9-ae98-3671218b76e3\") " Oct 07 13:49:01 crc kubenswrapper[5024]: I1007 13:49:01.549877 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1543335-4a9b-41f9-ae98-3671218b76e3-utilities" (OuterVolumeSpecName: "utilities") pod "c1543335-4a9b-41f9-ae98-3671218b76e3" (UID: "c1543335-4a9b-41f9-ae98-3671218b76e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:49:01 crc kubenswrapper[5024]: I1007 13:49:01.561894 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1543335-4a9b-41f9-ae98-3671218b76e3-kube-api-access-ddwkz" (OuterVolumeSpecName: "kube-api-access-ddwkz") pod "c1543335-4a9b-41f9-ae98-3671218b76e3" (UID: "c1543335-4a9b-41f9-ae98-3671218b76e3"). InnerVolumeSpecName "kube-api-access-ddwkz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:49:01 crc kubenswrapper[5024]: I1007 13:49:01.602661 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1543335-4a9b-41f9-ae98-3671218b76e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1543335-4a9b-41f9-ae98-3671218b76e3" (UID: "c1543335-4a9b-41f9-ae98-3671218b76e3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:49:01 crc kubenswrapper[5024]: I1007 13:49:01.650686 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1543335-4a9b-41f9-ae98-3671218b76e3-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 13:49:01 crc kubenswrapper[5024]: I1007 13:49:01.650725 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1543335-4a9b-41f9-ae98-3671218b76e3-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 13:49:01 crc kubenswrapper[5024]: I1007 13:49:01.650738 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddwkz\" (UniqueName: \"kubernetes.io/projected/c1543335-4a9b-41f9-ae98-3671218b76e3-kube-api-access-ddwkz\") on node \"crc\" DevicePath \"\"" Oct 07 13:49:02 crc kubenswrapper[5024]: I1007 13:49:02.119599 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbs4d" event={"ID":"c1543335-4a9b-41f9-ae98-3671218b76e3","Type":"ContainerDied","Data":"c13dd19a46afaea26e8ffbbabc2f6390698d967cc80f9ed5c61ef62c35afd314"} Oct 07 13:49:02 crc kubenswrapper[5024]: I1007 13:49:02.119710 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zbs4d" Oct 07 13:49:02 crc kubenswrapper[5024]: I1007 13:49:02.121034 5024 scope.go:117] "RemoveContainer" containerID="73e4a563c75064efdcb428684d5c192196b735d4577ad6586ce681f9d27facdf" Oct 07 13:49:02 crc kubenswrapper[5024]: I1007 13:49:02.161992 5024 scope.go:117] "RemoveContainer" containerID="125c7f9b26edd8d7431b9fdd38b5ee13e441e8d151fba96bb2292b6116cc3105" Oct 07 13:49:02 crc kubenswrapper[5024]: I1007 13:49:02.168929 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zbs4d"] Oct 07 13:49:02 crc kubenswrapper[5024]: I1007 13:49:02.184279 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zbs4d"] Oct 07 13:49:02 crc kubenswrapper[5024]: I1007 13:49:02.218941 5024 scope.go:117] "RemoveContainer" containerID="3b43aca02abe4e327a917276e89824cb153077b6cf1b61473556b2d2cf168320" Oct 07 13:49:02 crc kubenswrapper[5024]: I1007 13:49:02.778315 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1543335-4a9b-41f9-ae98-3671218b76e3" path="/var/lib/kubelet/pods/c1543335-4a9b-41f9-ae98-3671218b76e3/volumes" Oct 07 13:50:13 crc kubenswrapper[5024]: I1007 13:50:13.720675 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 13:50:13 crc kubenswrapper[5024]: I1007 13:50:13.721321 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 13:50:43 crc kubenswrapper[5024]: 
I1007 13:50:43.721036 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 13:50:43 crc kubenswrapper[5024]: I1007 13:50:43.721926 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 13:51:13 crc kubenswrapper[5024]: I1007 13:51:13.720700 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 13:51:13 crc kubenswrapper[5024]: I1007 13:51:13.721414 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 13:51:13 crc kubenswrapper[5024]: I1007 13:51:13.721469 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 13:51:13 crc kubenswrapper[5024]: I1007 13:51:13.722563 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7827aa34827db81de10dcfe39b279331f1c031e07b9d44bfa50d92e0956aff09"} 
pod="openshift-machine-config-operator/machine-config-daemon-t95cr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 07 13:51:13 crc kubenswrapper[5024]: I1007 13:51:13.722629 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" containerID="cri-o://7827aa34827db81de10dcfe39b279331f1c031e07b9d44bfa50d92e0956aff09" gracePeriod=600 Oct 07 13:51:14 crc kubenswrapper[5024]: I1007 13:51:14.637224 5024 generic.go:334] "Generic (PLEG): container finished" podID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerID="7827aa34827db81de10dcfe39b279331f1c031e07b9d44bfa50d92e0956aff09" exitCode=0 Oct 07 13:51:14 crc kubenswrapper[5024]: I1007 13:51:14.637296 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerDied","Data":"7827aa34827db81de10dcfe39b279331f1c031e07b9d44bfa50d92e0956aff09"} Oct 07 13:51:14 crc kubenswrapper[5024]: I1007 13:51:14.637912 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerStarted","Data":"667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599"} Oct 07 13:51:14 crc kubenswrapper[5024]: I1007 13:51:14.637971 5024 scope.go:117] "RemoveContainer" containerID="995ad31f863af31cdd59dfcb952b071be4c55780d5fb28769e59faa2a8500f73" Oct 07 13:53:43 crc kubenswrapper[5024]: I1007 13:53:43.720704 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Oct 07 13:53:43 crc kubenswrapper[5024]: I1007 13:53:43.723232 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 13:54:13 crc kubenswrapper[5024]: I1007 13:54:13.721069 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 13:54:13 crc kubenswrapper[5024]: I1007 13:54:13.723429 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 13:54:39 crc kubenswrapper[5024]: I1007 13:54:39.298310 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z5pq2"] Oct 07 13:54:39 crc kubenswrapper[5024]: E1007 13:54:39.299535 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1543335-4a9b-41f9-ae98-3671218b76e3" containerName="extract-content" Oct 07 13:54:39 crc kubenswrapper[5024]: I1007 13:54:39.299564 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1543335-4a9b-41f9-ae98-3671218b76e3" containerName="extract-content" Oct 07 13:54:39 crc kubenswrapper[5024]: E1007 13:54:39.299588 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1543335-4a9b-41f9-ae98-3671218b76e3" containerName="registry-server" Oct 07 13:54:39 crc kubenswrapper[5024]: I1007 13:54:39.299594 5024 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c1543335-4a9b-41f9-ae98-3671218b76e3" containerName="registry-server" Oct 07 13:54:39 crc kubenswrapper[5024]: E1007 13:54:39.299626 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1543335-4a9b-41f9-ae98-3671218b76e3" containerName="extract-utilities" Oct 07 13:54:39 crc kubenswrapper[5024]: I1007 13:54:39.299633 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1543335-4a9b-41f9-ae98-3671218b76e3" containerName="extract-utilities" Oct 07 13:54:39 crc kubenswrapper[5024]: I1007 13:54:39.299849 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1543335-4a9b-41f9-ae98-3671218b76e3" containerName="registry-server" Oct 07 13:54:39 crc kubenswrapper[5024]: I1007 13:54:39.301467 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z5pq2" Oct 07 13:54:39 crc kubenswrapper[5024]: I1007 13:54:39.316086 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z5pq2"] Oct 07 13:54:39 crc kubenswrapper[5024]: I1007 13:54:39.323870 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6cgw\" (UniqueName: \"kubernetes.io/projected/c603ee73-cfab-4564-9f7d-7a8dbfa05c63-kube-api-access-j6cgw\") pod \"redhat-operators-z5pq2\" (UID: \"c603ee73-cfab-4564-9f7d-7a8dbfa05c63\") " pod="openshift-marketplace/redhat-operators-z5pq2" Oct 07 13:54:39 crc kubenswrapper[5024]: I1007 13:54:39.323955 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c603ee73-cfab-4564-9f7d-7a8dbfa05c63-utilities\") pod \"redhat-operators-z5pq2\" (UID: \"c603ee73-cfab-4564-9f7d-7a8dbfa05c63\") " pod="openshift-marketplace/redhat-operators-z5pq2" Oct 07 13:54:39 crc kubenswrapper[5024]: I1007 13:54:39.324020 5024 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c603ee73-cfab-4564-9f7d-7a8dbfa05c63-catalog-content\") pod \"redhat-operators-z5pq2\" (UID: \"c603ee73-cfab-4564-9f7d-7a8dbfa05c63\") " pod="openshift-marketplace/redhat-operators-z5pq2" Oct 07 13:54:39 crc kubenswrapper[5024]: I1007 13:54:39.426317 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6cgw\" (UniqueName: \"kubernetes.io/projected/c603ee73-cfab-4564-9f7d-7a8dbfa05c63-kube-api-access-j6cgw\") pod \"redhat-operators-z5pq2\" (UID: \"c603ee73-cfab-4564-9f7d-7a8dbfa05c63\") " pod="openshift-marketplace/redhat-operators-z5pq2" Oct 07 13:54:39 crc kubenswrapper[5024]: I1007 13:54:39.426421 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c603ee73-cfab-4564-9f7d-7a8dbfa05c63-utilities\") pod \"redhat-operators-z5pq2\" (UID: \"c603ee73-cfab-4564-9f7d-7a8dbfa05c63\") " pod="openshift-marketplace/redhat-operators-z5pq2" Oct 07 13:54:39 crc kubenswrapper[5024]: I1007 13:54:39.426469 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c603ee73-cfab-4564-9f7d-7a8dbfa05c63-catalog-content\") pod \"redhat-operators-z5pq2\" (UID: \"c603ee73-cfab-4564-9f7d-7a8dbfa05c63\") " pod="openshift-marketplace/redhat-operators-z5pq2" Oct 07 13:54:39 crc kubenswrapper[5024]: I1007 13:54:39.427094 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c603ee73-cfab-4564-9f7d-7a8dbfa05c63-catalog-content\") pod \"redhat-operators-z5pq2\" (UID: \"c603ee73-cfab-4564-9f7d-7a8dbfa05c63\") " pod="openshift-marketplace/redhat-operators-z5pq2" Oct 07 13:54:39 crc kubenswrapper[5024]: I1007 13:54:39.427273 5024 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c603ee73-cfab-4564-9f7d-7a8dbfa05c63-utilities\") pod \"redhat-operators-z5pq2\" (UID: \"c603ee73-cfab-4564-9f7d-7a8dbfa05c63\") " pod="openshift-marketplace/redhat-operators-z5pq2" Oct 07 13:54:39 crc kubenswrapper[5024]: I1007 13:54:39.456993 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6cgw\" (UniqueName: \"kubernetes.io/projected/c603ee73-cfab-4564-9f7d-7a8dbfa05c63-kube-api-access-j6cgw\") pod \"redhat-operators-z5pq2\" (UID: \"c603ee73-cfab-4564-9f7d-7a8dbfa05c63\") " pod="openshift-marketplace/redhat-operators-z5pq2" Oct 07 13:54:39 crc kubenswrapper[5024]: I1007 13:54:39.630230 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z5pq2" Oct 07 13:54:40 crc kubenswrapper[5024]: I1007 13:54:40.158563 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z5pq2"] Oct 07 13:54:41 crc kubenswrapper[5024]: I1007 13:54:41.101598 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5pq2" event={"ID":"c603ee73-cfab-4564-9f7d-7a8dbfa05c63","Type":"ContainerDied","Data":"81117d8e8912223e5ae706d4433d8399d080962dc48dfcb696a129be54e3adea"} Oct 07 13:54:41 crc kubenswrapper[5024]: I1007 13:54:41.110764 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 07 13:54:41 crc kubenswrapper[5024]: I1007 13:54:41.101418 5024 generic.go:334] "Generic (PLEG): container finished" podID="c603ee73-cfab-4564-9f7d-7a8dbfa05c63" containerID="81117d8e8912223e5ae706d4433d8399d080962dc48dfcb696a129be54e3adea" exitCode=0 Oct 07 13:54:41 crc kubenswrapper[5024]: I1007 13:54:41.110874 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5pq2" 
event={"ID":"c603ee73-cfab-4564-9f7d-7a8dbfa05c63","Type":"ContainerStarted","Data":"90a5a24560786010b73f416ee919229bdb6c5ca05da60f652964d83d25893fd4"} Oct 07 13:54:42 crc kubenswrapper[5024]: I1007 13:54:42.120009 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5pq2" event={"ID":"c603ee73-cfab-4564-9f7d-7a8dbfa05c63","Type":"ContainerStarted","Data":"10b688df805531958f35e953105e0cc48c2fd0661b7eeff426dd24b356c21156"} Oct 07 13:54:43 crc kubenswrapper[5024]: I1007 13:54:43.136477 5024 generic.go:334] "Generic (PLEG): container finished" podID="c603ee73-cfab-4564-9f7d-7a8dbfa05c63" containerID="10b688df805531958f35e953105e0cc48c2fd0661b7eeff426dd24b356c21156" exitCode=0 Oct 07 13:54:43 crc kubenswrapper[5024]: I1007 13:54:43.136569 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5pq2" event={"ID":"c603ee73-cfab-4564-9f7d-7a8dbfa05c63","Type":"ContainerDied","Data":"10b688df805531958f35e953105e0cc48c2fd0661b7eeff426dd24b356c21156"} Oct 07 13:54:43 crc kubenswrapper[5024]: I1007 13:54:43.720399 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 13:54:43 crc kubenswrapper[5024]: I1007 13:54:43.720498 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 13:54:43 crc kubenswrapper[5024]: I1007 13:54:43.720563 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 13:54:43 crc kubenswrapper[5024]: I1007 13:54:43.721721 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599"} pod="openshift-machine-config-operator/machine-config-daemon-t95cr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 07 13:54:43 crc kubenswrapper[5024]: I1007 13:54:43.721805 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" containerID="cri-o://667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599" gracePeriod=600 Oct 07 13:54:43 crc kubenswrapper[5024]: E1007 13:54:43.878637 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:54:44 crc kubenswrapper[5024]: I1007 13:54:44.149381 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5pq2" event={"ID":"c603ee73-cfab-4564-9f7d-7a8dbfa05c63","Type":"ContainerStarted","Data":"e3ae67a5b56b7d65468f0949e87188b861cce5d757a37d3da466fee30b155166"} Oct 07 13:54:44 crc kubenswrapper[5024]: I1007 13:54:44.153789 5024 generic.go:334] "Generic (PLEG): container finished" podID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerID="667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599" exitCode=0 Oct 07 13:54:44 crc 
kubenswrapper[5024]: I1007 13:54:44.153823 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerDied","Data":"667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599"} Oct 07 13:54:44 crc kubenswrapper[5024]: I1007 13:54:44.153852 5024 scope.go:117] "RemoveContainer" containerID="7827aa34827db81de10dcfe39b279331f1c031e07b9d44bfa50d92e0956aff09" Oct 07 13:54:44 crc kubenswrapper[5024]: I1007 13:54:44.154214 5024 scope.go:117] "RemoveContainer" containerID="667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599" Oct 07 13:54:44 crc kubenswrapper[5024]: E1007 13:54:44.154493 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:54:44 crc kubenswrapper[5024]: I1007 13:54:44.181634 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z5pq2" podStartSLOduration=2.630816253 podStartE2EDuration="5.181609525s" podCreationTimestamp="2025-10-07 13:54:39 +0000 UTC" firstStartedPulling="2025-10-07 13:54:41.110205284 +0000 UTC m=+5219.185992142" lastFinishedPulling="2025-10-07 13:54:43.660998576 +0000 UTC m=+5221.736785414" observedRunningTime="2025-10-07 13:54:44.176353013 +0000 UTC m=+5222.252139871" watchObservedRunningTime="2025-10-07 13:54:44.181609525 +0000 UTC m=+5222.257396363" Oct 07 13:54:49 crc kubenswrapper[5024]: I1007 13:54:49.630533 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z5pq2" Oct 07 13:54:49 
crc kubenswrapper[5024]: I1007 13:54:49.631382 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z5pq2" Oct 07 13:54:49 crc kubenswrapper[5024]: I1007 13:54:49.690174 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z5pq2" Oct 07 13:54:50 crc kubenswrapper[5024]: I1007 13:54:50.309030 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z5pq2" Oct 07 13:54:50 crc kubenswrapper[5024]: I1007 13:54:50.378125 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z5pq2"] Oct 07 13:54:52 crc kubenswrapper[5024]: I1007 13:54:52.256310 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z5pq2" podUID="c603ee73-cfab-4564-9f7d-7a8dbfa05c63" containerName="registry-server" containerID="cri-o://e3ae67a5b56b7d65468f0949e87188b861cce5d757a37d3da466fee30b155166" gracePeriod=2 Oct 07 13:54:52 crc kubenswrapper[5024]: I1007 13:54:52.884500 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z5pq2" Oct 07 13:54:52 crc kubenswrapper[5024]: I1007 13:54:52.945623 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6cgw\" (UniqueName: \"kubernetes.io/projected/c603ee73-cfab-4564-9f7d-7a8dbfa05c63-kube-api-access-j6cgw\") pod \"c603ee73-cfab-4564-9f7d-7a8dbfa05c63\" (UID: \"c603ee73-cfab-4564-9f7d-7a8dbfa05c63\") " Oct 07 13:54:52 crc kubenswrapper[5024]: I1007 13:54:52.945808 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c603ee73-cfab-4564-9f7d-7a8dbfa05c63-catalog-content\") pod \"c603ee73-cfab-4564-9f7d-7a8dbfa05c63\" (UID: \"c603ee73-cfab-4564-9f7d-7a8dbfa05c63\") " Oct 07 13:54:52 crc kubenswrapper[5024]: I1007 13:54:52.945862 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c603ee73-cfab-4564-9f7d-7a8dbfa05c63-utilities\") pod \"c603ee73-cfab-4564-9f7d-7a8dbfa05c63\" (UID: \"c603ee73-cfab-4564-9f7d-7a8dbfa05c63\") " Oct 07 13:54:52 crc kubenswrapper[5024]: I1007 13:54:52.947284 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c603ee73-cfab-4564-9f7d-7a8dbfa05c63-utilities" (OuterVolumeSpecName: "utilities") pod "c603ee73-cfab-4564-9f7d-7a8dbfa05c63" (UID: "c603ee73-cfab-4564-9f7d-7a8dbfa05c63"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:54:52 crc kubenswrapper[5024]: I1007 13:54:52.950312 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c603ee73-cfab-4564-9f7d-7a8dbfa05c63-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 13:54:52 crc kubenswrapper[5024]: I1007 13:54:52.954168 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c603ee73-cfab-4564-9f7d-7a8dbfa05c63-kube-api-access-j6cgw" (OuterVolumeSpecName: "kube-api-access-j6cgw") pod "c603ee73-cfab-4564-9f7d-7a8dbfa05c63" (UID: "c603ee73-cfab-4564-9f7d-7a8dbfa05c63"). InnerVolumeSpecName "kube-api-access-j6cgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:54:53 crc kubenswrapper[5024]: I1007 13:54:53.052774 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6cgw\" (UniqueName: \"kubernetes.io/projected/c603ee73-cfab-4564-9f7d-7a8dbfa05c63-kube-api-access-j6cgw\") on node \"crc\" DevicePath \"\"" Oct 07 13:54:53 crc kubenswrapper[5024]: I1007 13:54:53.270082 5024 generic.go:334] "Generic (PLEG): container finished" podID="c603ee73-cfab-4564-9f7d-7a8dbfa05c63" containerID="e3ae67a5b56b7d65468f0949e87188b861cce5d757a37d3da466fee30b155166" exitCode=0 Oct 07 13:54:53 crc kubenswrapper[5024]: I1007 13:54:53.270155 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5pq2" event={"ID":"c603ee73-cfab-4564-9f7d-7a8dbfa05c63","Type":"ContainerDied","Data":"e3ae67a5b56b7d65468f0949e87188b861cce5d757a37d3da466fee30b155166"} Oct 07 13:54:53 crc kubenswrapper[5024]: I1007 13:54:53.271429 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5pq2" event={"ID":"c603ee73-cfab-4564-9f7d-7a8dbfa05c63","Type":"ContainerDied","Data":"90a5a24560786010b73f416ee919229bdb6c5ca05da60f652964d83d25893fd4"} Oct 07 13:54:53 crc kubenswrapper[5024]: I1007 
13:54:53.271493 5024 scope.go:117] "RemoveContainer" containerID="e3ae67a5b56b7d65468f0949e87188b861cce5d757a37d3da466fee30b155166" Oct 07 13:54:53 crc kubenswrapper[5024]: I1007 13:54:53.270251 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z5pq2" Oct 07 13:54:53 crc kubenswrapper[5024]: I1007 13:54:53.299621 5024 scope.go:117] "RemoveContainer" containerID="10b688df805531958f35e953105e0cc48c2fd0661b7eeff426dd24b356c21156" Oct 07 13:54:53 crc kubenswrapper[5024]: I1007 13:54:53.344047 5024 scope.go:117] "RemoveContainer" containerID="81117d8e8912223e5ae706d4433d8399d080962dc48dfcb696a129be54e3adea" Oct 07 13:54:53 crc kubenswrapper[5024]: I1007 13:54:53.399325 5024 scope.go:117] "RemoveContainer" containerID="e3ae67a5b56b7d65468f0949e87188b861cce5d757a37d3da466fee30b155166" Oct 07 13:54:53 crc kubenswrapper[5024]: E1007 13:54:53.400019 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3ae67a5b56b7d65468f0949e87188b861cce5d757a37d3da466fee30b155166\": container with ID starting with e3ae67a5b56b7d65468f0949e87188b861cce5d757a37d3da466fee30b155166 not found: ID does not exist" containerID="e3ae67a5b56b7d65468f0949e87188b861cce5d757a37d3da466fee30b155166" Oct 07 13:54:53 crc kubenswrapper[5024]: I1007 13:54:53.400084 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3ae67a5b56b7d65468f0949e87188b861cce5d757a37d3da466fee30b155166"} err="failed to get container status \"e3ae67a5b56b7d65468f0949e87188b861cce5d757a37d3da466fee30b155166\": rpc error: code = NotFound desc = could not find container \"e3ae67a5b56b7d65468f0949e87188b861cce5d757a37d3da466fee30b155166\": container with ID starting with e3ae67a5b56b7d65468f0949e87188b861cce5d757a37d3da466fee30b155166 not found: ID does not exist" Oct 07 13:54:53 crc kubenswrapper[5024]: I1007 13:54:53.400143 5024 
scope.go:117] "RemoveContainer" containerID="10b688df805531958f35e953105e0cc48c2fd0661b7eeff426dd24b356c21156" Oct 07 13:54:53 crc kubenswrapper[5024]: E1007 13:54:53.401695 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10b688df805531958f35e953105e0cc48c2fd0661b7eeff426dd24b356c21156\": container with ID starting with 10b688df805531958f35e953105e0cc48c2fd0661b7eeff426dd24b356c21156 not found: ID does not exist" containerID="10b688df805531958f35e953105e0cc48c2fd0661b7eeff426dd24b356c21156" Oct 07 13:54:53 crc kubenswrapper[5024]: I1007 13:54:53.401741 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10b688df805531958f35e953105e0cc48c2fd0661b7eeff426dd24b356c21156"} err="failed to get container status \"10b688df805531958f35e953105e0cc48c2fd0661b7eeff426dd24b356c21156\": rpc error: code = NotFound desc = could not find container \"10b688df805531958f35e953105e0cc48c2fd0661b7eeff426dd24b356c21156\": container with ID starting with 10b688df805531958f35e953105e0cc48c2fd0661b7eeff426dd24b356c21156 not found: ID does not exist" Oct 07 13:54:53 crc kubenswrapper[5024]: I1007 13:54:53.401770 5024 scope.go:117] "RemoveContainer" containerID="81117d8e8912223e5ae706d4433d8399d080962dc48dfcb696a129be54e3adea" Oct 07 13:54:53 crc kubenswrapper[5024]: E1007 13:54:53.402726 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81117d8e8912223e5ae706d4433d8399d080962dc48dfcb696a129be54e3adea\": container with ID starting with 81117d8e8912223e5ae706d4433d8399d080962dc48dfcb696a129be54e3adea not found: ID does not exist" containerID="81117d8e8912223e5ae706d4433d8399d080962dc48dfcb696a129be54e3adea" Oct 07 13:54:53 crc kubenswrapper[5024]: I1007 13:54:53.402774 5024 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"81117d8e8912223e5ae706d4433d8399d080962dc48dfcb696a129be54e3adea"} err="failed to get container status \"81117d8e8912223e5ae706d4433d8399d080962dc48dfcb696a129be54e3adea\": rpc error: code = NotFound desc = could not find container \"81117d8e8912223e5ae706d4433d8399d080962dc48dfcb696a129be54e3adea\": container with ID starting with 81117d8e8912223e5ae706d4433d8399d080962dc48dfcb696a129be54e3adea not found: ID does not exist" Oct 07 13:54:53 crc kubenswrapper[5024]: I1007 13:54:53.492243 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c603ee73-cfab-4564-9f7d-7a8dbfa05c63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c603ee73-cfab-4564-9f7d-7a8dbfa05c63" (UID: "c603ee73-cfab-4564-9f7d-7a8dbfa05c63"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:54:53 crc kubenswrapper[5024]: I1007 13:54:53.562846 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c603ee73-cfab-4564-9f7d-7a8dbfa05c63-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 13:54:53 crc kubenswrapper[5024]: I1007 13:54:53.610244 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z5pq2"] Oct 07 13:54:53 crc kubenswrapper[5024]: I1007 13:54:53.624221 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z5pq2"] Oct 07 13:54:54 crc kubenswrapper[5024]: I1007 13:54:54.762050 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c603ee73-cfab-4564-9f7d-7a8dbfa05c63" path="/var/lib/kubelet/pods/c603ee73-cfab-4564-9f7d-7a8dbfa05c63/volumes" Oct 07 13:54:55 crc kubenswrapper[5024]: I1007 13:54:55.752036 5024 scope.go:117] "RemoveContainer" containerID="667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599" Oct 07 13:54:55 crc kubenswrapper[5024]: E1007 
13:54:55.752909 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:55:06 crc kubenswrapper[5024]: I1007 13:55:06.752172 5024 scope.go:117] "RemoveContainer" containerID="667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599" Oct 07 13:55:06 crc kubenswrapper[5024]: E1007 13:55:06.754076 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:55:21 crc kubenswrapper[5024]: I1007 13:55:21.753214 5024 scope.go:117] "RemoveContainer" containerID="667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599" Oct 07 13:55:21 crc kubenswrapper[5024]: E1007 13:55:21.754729 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:55:32 crc kubenswrapper[5024]: I1007 13:55:32.760547 5024 scope.go:117] "RemoveContainer" containerID="667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599" Oct 07 13:55:32 crc 
kubenswrapper[5024]: E1007 13:55:32.761584 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:55:35 crc kubenswrapper[5024]: I1007 13:55:35.917151 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hjq9m"] Oct 07 13:55:35 crc kubenswrapper[5024]: E1007 13:55:35.918280 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c603ee73-cfab-4564-9f7d-7a8dbfa05c63" containerName="extract-content" Oct 07 13:55:35 crc kubenswrapper[5024]: I1007 13:55:35.918295 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c603ee73-cfab-4564-9f7d-7a8dbfa05c63" containerName="extract-content" Oct 07 13:55:35 crc kubenswrapper[5024]: E1007 13:55:35.918317 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c603ee73-cfab-4564-9f7d-7a8dbfa05c63" containerName="extract-utilities" Oct 07 13:55:35 crc kubenswrapper[5024]: I1007 13:55:35.918324 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c603ee73-cfab-4564-9f7d-7a8dbfa05c63" containerName="extract-utilities" Oct 07 13:55:35 crc kubenswrapper[5024]: E1007 13:55:35.918358 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c603ee73-cfab-4564-9f7d-7a8dbfa05c63" containerName="registry-server" Oct 07 13:55:35 crc kubenswrapper[5024]: I1007 13:55:35.918364 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c603ee73-cfab-4564-9f7d-7a8dbfa05c63" containerName="registry-server" Oct 07 13:55:35 crc kubenswrapper[5024]: I1007 13:55:35.918560 5024 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c603ee73-cfab-4564-9f7d-7a8dbfa05c63" containerName="registry-server" Oct 07 13:55:35 crc kubenswrapper[5024]: I1007 13:55:35.920044 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hjq9m" Oct 07 13:55:35 crc kubenswrapper[5024]: I1007 13:55:35.934036 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hjq9m"] Oct 07 13:55:36 crc kubenswrapper[5024]: I1007 13:55:36.068431 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1-utilities\") pod \"certified-operators-hjq9m\" (UID: \"c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1\") " pod="openshift-marketplace/certified-operators-hjq9m" Oct 07 13:55:36 crc kubenswrapper[5024]: I1007 13:55:36.068496 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tfsn\" (UniqueName: \"kubernetes.io/projected/c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1-kube-api-access-7tfsn\") pod \"certified-operators-hjq9m\" (UID: \"c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1\") " pod="openshift-marketplace/certified-operators-hjq9m" Oct 07 13:55:36 crc kubenswrapper[5024]: I1007 13:55:36.068865 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1-catalog-content\") pod \"certified-operators-hjq9m\" (UID: \"c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1\") " pod="openshift-marketplace/certified-operators-hjq9m" Oct 07 13:55:36 crc kubenswrapper[5024]: I1007 13:55:36.171080 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tfsn\" (UniqueName: \"kubernetes.io/projected/c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1-kube-api-access-7tfsn\") pod \"certified-operators-hjq9m\" 
(UID: \"c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1\") " pod="openshift-marketplace/certified-operators-hjq9m" Oct 07 13:55:36 crc kubenswrapper[5024]: I1007 13:55:36.171917 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1-catalog-content\") pod \"certified-operators-hjq9m\" (UID: \"c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1\") " pod="openshift-marketplace/certified-operators-hjq9m" Oct 07 13:55:36 crc kubenswrapper[5024]: I1007 13:55:36.172250 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1-utilities\") pod \"certified-operators-hjq9m\" (UID: \"c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1\") " pod="openshift-marketplace/certified-operators-hjq9m" Oct 07 13:55:36 crc kubenswrapper[5024]: I1007 13:55:36.172466 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1-catalog-content\") pod \"certified-operators-hjq9m\" (UID: \"c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1\") " pod="openshift-marketplace/certified-operators-hjq9m" Oct 07 13:55:36 crc kubenswrapper[5024]: I1007 13:55:36.172727 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1-utilities\") pod \"certified-operators-hjq9m\" (UID: \"c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1\") " pod="openshift-marketplace/certified-operators-hjq9m" Oct 07 13:55:36 crc kubenswrapper[5024]: I1007 13:55:36.700723 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tfsn\" (UniqueName: \"kubernetes.io/projected/c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1-kube-api-access-7tfsn\") pod \"certified-operators-hjq9m\" (UID: \"c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1\") " 
pod="openshift-marketplace/certified-operators-hjq9m"
Oct 07 13:55:36 crc kubenswrapper[5024]: I1007 13:55:36.859366 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hjq9m"
Oct 07 13:55:37 crc kubenswrapper[5024]: I1007 13:55:37.408517 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hjq9m"]
Oct 07 13:55:37 crc kubenswrapper[5024]: I1007 13:55:37.754445 5024 generic.go:334] "Generic (PLEG): container finished" podID="c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1" containerID="94dfd576712c5adceee028264d65007a52408ec6c9b8f972c4b8dbcca9505dc0" exitCode=0
Oct 07 13:55:37 crc kubenswrapper[5024]: I1007 13:55:37.754513 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hjq9m" event={"ID":"c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1","Type":"ContainerDied","Data":"94dfd576712c5adceee028264d65007a52408ec6c9b8f972c4b8dbcca9505dc0"}
Oct 07 13:55:37 crc kubenswrapper[5024]: I1007 13:55:37.754791 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hjq9m" event={"ID":"c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1","Type":"ContainerStarted","Data":"0600a3412c60ec84f4c08173309726678db630451f7330f6021206116a173c51"}
Oct 07 13:55:39 crc kubenswrapper[5024]: I1007 13:55:39.777493 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hjq9m" event={"ID":"c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1","Type":"ContainerStarted","Data":"ebd204f046fe2b40373c182920e98360738be3983e42beb303b91041b860adb9"}
Oct 07 13:55:40 crc kubenswrapper[5024]: I1007 13:55:40.813640 5024 generic.go:334] "Generic (PLEG): container finished" podID="c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1" containerID="ebd204f046fe2b40373c182920e98360738be3983e42beb303b91041b860adb9" exitCode=0
Oct 07 13:55:40 crc kubenswrapper[5024]: I1007 13:55:40.813738 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hjq9m" event={"ID":"c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1","Type":"ContainerDied","Data":"ebd204f046fe2b40373c182920e98360738be3983e42beb303b91041b860adb9"}
Oct 07 13:55:41 crc kubenswrapper[5024]: I1007 13:55:41.842584 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hjq9m" event={"ID":"c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1","Type":"ContainerStarted","Data":"4b814ad2185470d5054bd707c5a92b64e35fea895ff255dbfa8e660274b4e493"}
Oct 07 13:55:41 crc kubenswrapper[5024]: I1007 13:55:41.880546 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hjq9m" podStartSLOduration=3.350382969 podStartE2EDuration="6.880509161s" podCreationTimestamp="2025-10-07 13:55:35 +0000 UTC" firstStartedPulling="2025-10-07 13:55:37.75771711 +0000 UTC m=+5275.833503948" lastFinishedPulling="2025-10-07 13:55:41.287843262 +0000 UTC m=+5279.363630140" observedRunningTime="2025-10-07 13:55:41.872335195 +0000 UTC m=+5279.948122073" watchObservedRunningTime="2025-10-07 13:55:41.880509161 +0000 UTC m=+5279.956296029"
Oct 07 13:55:43 crc kubenswrapper[5024]: I1007 13:55:43.751411 5024 scope.go:117] "RemoveContainer" containerID="667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599"
Oct 07 13:55:43 crc kubenswrapper[5024]: E1007 13:55:43.752040 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b"
Oct 07 13:55:46 crc kubenswrapper[5024]: I1007 13:55:46.860297 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hjq9m"
Oct 07 13:55:46 crc kubenswrapper[5024]: I1007 13:55:46.861029 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hjq9m"
Oct 07 13:55:46 crc kubenswrapper[5024]: I1007 13:55:46.908503 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hjq9m"
Oct 07 13:55:46 crc kubenswrapper[5024]: I1007 13:55:46.973435 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hjq9m"
Oct 07 13:55:47 crc kubenswrapper[5024]: I1007 13:55:47.145535 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hjq9m"]
Oct 07 13:55:48 crc kubenswrapper[5024]: I1007 13:55:48.907721 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hjq9m" podUID="c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1" containerName="registry-server" containerID="cri-o://4b814ad2185470d5054bd707c5a92b64e35fea895ff255dbfa8e660274b4e493" gracePeriod=2
Oct 07 13:55:49 crc kubenswrapper[5024]: I1007 13:55:49.926818 5024 generic.go:334] "Generic (PLEG): container finished" podID="c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1" containerID="4b814ad2185470d5054bd707c5a92b64e35fea895ff255dbfa8e660274b4e493" exitCode=0
Oct 07 13:55:49 crc kubenswrapper[5024]: I1007 13:55:49.926879 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hjq9m" event={"ID":"c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1","Type":"ContainerDied","Data":"4b814ad2185470d5054bd707c5a92b64e35fea895ff255dbfa8e660274b4e493"}
Oct 07 13:55:50 crc kubenswrapper[5024]: I1007 13:55:50.118047 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hjq9m"
Oct 07 13:55:50 crc kubenswrapper[5024]: I1007 13:55:50.222023 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1-utilities\") pod \"c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1\" (UID: \"c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1\") "
Oct 07 13:55:50 crc kubenswrapper[5024]: I1007 13:55:50.222325 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tfsn\" (UniqueName: \"kubernetes.io/projected/c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1-kube-api-access-7tfsn\") pod \"c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1\" (UID: \"c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1\") "
Oct 07 13:55:50 crc kubenswrapper[5024]: I1007 13:55:50.222546 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1-catalog-content\") pod \"c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1\" (UID: \"c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1\") "
Oct 07 13:55:50 crc kubenswrapper[5024]: I1007 13:55:50.222880 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1-utilities" (OuterVolumeSpecName: "utilities") pod "c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1" (UID: "c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 07 13:55:50 crc kubenswrapper[5024]: I1007 13:55:50.223553 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1-utilities\") on node \"crc\" DevicePath \"\""
Oct 07 13:55:50 crc kubenswrapper[5024]: I1007 13:55:50.285397 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1" (UID: "c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 07 13:55:50 crc kubenswrapper[5024]: I1007 13:55:50.292230 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1-kube-api-access-7tfsn" (OuterVolumeSpecName: "kube-api-access-7tfsn") pod "c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1" (UID: "c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1"). InnerVolumeSpecName "kube-api-access-7tfsn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 13:55:50 crc kubenswrapper[5024]: I1007 13:55:50.325732 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tfsn\" (UniqueName: \"kubernetes.io/projected/c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1-kube-api-access-7tfsn\") on node \"crc\" DevicePath \"\""
Oct 07 13:55:50 crc kubenswrapper[5024]: I1007 13:55:50.326182 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 07 13:55:50 crc kubenswrapper[5024]: I1007 13:55:50.944511 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hjq9m" event={"ID":"c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1","Type":"ContainerDied","Data":"0600a3412c60ec84f4c08173309726678db630451f7330f6021206116a173c51"}
Oct 07 13:55:50 crc kubenswrapper[5024]: I1007 13:55:50.944591 5024 scope.go:117] "RemoveContainer" containerID="4b814ad2185470d5054bd707c5a92b64e35fea895ff255dbfa8e660274b4e493"
Oct 07 13:55:50 crc kubenswrapper[5024]: I1007 13:55:50.944813 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hjq9m"
Oct 07 13:55:50 crc kubenswrapper[5024]: I1007 13:55:50.980998 5024 scope.go:117] "RemoveContainer" containerID="ebd204f046fe2b40373c182920e98360738be3983e42beb303b91041b860adb9"
Oct 07 13:55:50 crc kubenswrapper[5024]: I1007 13:55:50.982946 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hjq9m"]
Oct 07 13:55:50 crc kubenswrapper[5024]: I1007 13:55:50.992621 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hjq9m"]
Oct 07 13:55:51 crc kubenswrapper[5024]: I1007 13:55:51.003243 5024 scope.go:117] "RemoveContainer" containerID="94dfd576712c5adceee028264d65007a52408ec6c9b8f972c4b8dbcca9505dc0"
Oct 07 13:55:52 crc kubenswrapper[5024]: I1007 13:55:52.768255 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1" path="/var/lib/kubelet/pods/c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1/volumes"
Oct 07 13:55:52 crc kubenswrapper[5024]: I1007 13:55:52.974751 5024 generic.go:334] "Generic (PLEG): container finished" podID="c269d1e5-beee-4868-949d-eb84e0d44521" containerID="225b62f8c4deedb403267f5dc83a294ca8061adede0f195967692088714e2436" exitCode=1
Oct 07 13:55:52 crc kubenswrapper[5024]: I1007 13:55:52.974816 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"c269d1e5-beee-4868-949d-eb84e0d44521","Type":"ContainerDied","Data":"225b62f8c4deedb403267f5dc83a294ca8061adede0f195967692088714e2436"}
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.382436 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.539537 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c269d1e5-beee-4868-949d-eb84e0d44521-ssh-key\") pod \"c269d1e5-beee-4868-949d-eb84e0d44521\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") "
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.540102 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"c269d1e5-beee-4868-949d-eb84e0d44521\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") "
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.540155 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c269d1e5-beee-4868-949d-eb84e0d44521-openstack-config\") pod \"c269d1e5-beee-4868-949d-eb84e0d44521\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") "
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.540242 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c269d1e5-beee-4868-949d-eb84e0d44521-ca-certs\") pod \"c269d1e5-beee-4868-949d-eb84e0d44521\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") "
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.540331 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c269d1e5-beee-4868-949d-eb84e0d44521-test-operator-ephemeral-temporary\") pod \"c269d1e5-beee-4868-949d-eb84e0d44521\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") "
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.540491 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c269d1e5-beee-4868-949d-eb84e0d44521-config-data\") pod \"c269d1e5-beee-4868-949d-eb84e0d44521\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") "
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.540535 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c269d1e5-beee-4868-949d-eb84e0d44521-openstack-config-secret\") pod \"c269d1e5-beee-4868-949d-eb84e0d44521\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") "
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.540855 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c269d1e5-beee-4868-949d-eb84e0d44521-test-operator-ephemeral-workdir\") pod \"c269d1e5-beee-4868-949d-eb84e0d44521\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") "
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.540924 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7vmd\" (UniqueName: \"kubernetes.io/projected/c269d1e5-beee-4868-949d-eb84e0d44521-kube-api-access-n7vmd\") pod \"c269d1e5-beee-4868-949d-eb84e0d44521\" (UID: \"c269d1e5-beee-4868-949d-eb84e0d44521\") "
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.541648 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c269d1e5-beee-4868-949d-eb84e0d44521-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "c269d1e5-beee-4868-949d-eb84e0d44521" (UID: "c269d1e5-beee-4868-949d-eb84e0d44521"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.542359 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c269d1e5-beee-4868-949d-eb84e0d44521-config-data" (OuterVolumeSpecName: "config-data") pod "c269d1e5-beee-4868-949d-eb84e0d44521" (UID: "c269d1e5-beee-4868-949d-eb84e0d44521"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.543016 5024 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c269d1e5-beee-4868-949d-eb84e0d44521-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.543110 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c269d1e5-beee-4868-949d-eb84e0d44521-config-data\") on node \"crc\" DevicePath \"\""
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.551741 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c269d1e5-beee-4868-949d-eb84e0d44521-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "c269d1e5-beee-4868-949d-eb84e0d44521" (UID: "c269d1e5-beee-4868-949d-eb84e0d44521"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.552431 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "test-operator-logs") pod "c269d1e5-beee-4868-949d-eb84e0d44521" (UID: "c269d1e5-beee-4868-949d-eb84e0d44521"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.559416 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c269d1e5-beee-4868-949d-eb84e0d44521-kube-api-access-n7vmd" (OuterVolumeSpecName: "kube-api-access-n7vmd") pod "c269d1e5-beee-4868-949d-eb84e0d44521" (UID: "c269d1e5-beee-4868-949d-eb84e0d44521"). InnerVolumeSpecName "kube-api-access-n7vmd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.586305 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c269d1e5-beee-4868-949d-eb84e0d44521-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "c269d1e5-beee-4868-949d-eb84e0d44521" (UID: "c269d1e5-beee-4868-949d-eb84e0d44521"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.600916 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c269d1e5-beee-4868-949d-eb84e0d44521-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c269d1e5-beee-4868-949d-eb84e0d44521" (UID: "c269d1e5-beee-4868-949d-eb84e0d44521"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.603498 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c269d1e5-beee-4868-949d-eb84e0d44521-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "c269d1e5-beee-4868-949d-eb84e0d44521" (UID: "c269d1e5-beee-4868-949d-eb84e0d44521"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.612716 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c269d1e5-beee-4868-949d-eb84e0d44521-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "c269d1e5-beee-4868-949d-eb84e0d44521" (UID: "c269d1e5-beee-4868-949d-eb84e0d44521"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.645168 5024 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c269d1e5-beee-4868-949d-eb84e0d44521-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.645229 5024 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c269d1e5-beee-4868-949d-eb84e0d44521-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.645247 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7vmd\" (UniqueName: \"kubernetes.io/projected/c269d1e5-beee-4868-949d-eb84e0d44521-kube-api-access-n7vmd\") on node \"crc\" DevicePath \"\""
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.645261 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c269d1e5-beee-4868-949d-eb84e0d44521-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.645310 5024 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.645326 5024 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c269d1e5-beee-4868-949d-eb84e0d44521-openstack-config\") on node \"crc\" DevicePath \"\""
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.645338 5024 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c269d1e5-beee-4868-949d-eb84e0d44521-ca-certs\") on node \"crc\" DevicePath \"\""
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.667930 5024 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.747501 5024 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.997767 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"c269d1e5-beee-4868-949d-eb84e0d44521","Type":"ContainerDied","Data":"6b0aa77d3e52b6dc124cd6f75a2fca79fc2c06a993b3c8affb3557d0d6d033e5"}
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.997835 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b0aa77d3e52b6dc124cd6f75a2fca79fc2c06a993b3c8affb3557d0d6d033e5"
Oct 07 13:55:54 crc kubenswrapper[5024]: I1007 13:55:54.997901 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Oct 07 13:55:55 crc kubenswrapper[5024]: I1007 13:55:55.755052 5024 scope.go:117] "RemoveContainer" containerID="667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599"
Oct 07 13:55:55 crc kubenswrapper[5024]: E1007 13:55:55.756523 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b"
Oct 07 13:56:07 crc kubenswrapper[5024]: I1007 13:56:07.389635 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Oct 07 13:56:07 crc kubenswrapper[5024]: E1007 13:56:07.391080 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1" containerName="registry-server"
Oct 07 13:56:07 crc kubenswrapper[5024]: I1007 13:56:07.391112 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1" containerName="registry-server"
Oct 07 13:56:07 crc kubenswrapper[5024]: E1007 13:56:07.391192 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1" containerName="extract-utilities"
Oct 07 13:56:07 crc kubenswrapper[5024]: I1007 13:56:07.391206 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1" containerName="extract-utilities"
Oct 07 13:56:07 crc kubenswrapper[5024]: E1007 13:56:07.391233 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1" containerName="extract-content"
Oct 07 13:56:07 crc kubenswrapper[5024]: I1007 13:56:07.391245 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1" containerName="extract-content"
Oct 07 13:56:07 crc kubenswrapper[5024]: E1007 13:56:07.391299 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c269d1e5-beee-4868-949d-eb84e0d44521" containerName="tempest-tests-tempest-tests-runner"
Oct 07 13:56:07 crc kubenswrapper[5024]: I1007 13:56:07.391313 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c269d1e5-beee-4868-949d-eb84e0d44521" containerName="tempest-tests-tempest-tests-runner"
Oct 07 13:56:07 crc kubenswrapper[5024]: I1007 13:56:07.391692 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3a243d0-9a7a-45ab-87d8-8ae8b5cd8cd1" containerName="registry-server"
Oct 07 13:56:07 crc kubenswrapper[5024]: I1007 13:56:07.391756 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="c269d1e5-beee-4868-949d-eb84e0d44521" containerName="tempest-tests-tempest-tests-runner"
Oct 07 13:56:07 crc kubenswrapper[5024]: I1007 13:56:07.392998 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Oct 07 13:56:07 crc kubenswrapper[5024]: I1007 13:56:07.395607 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-65b5w"
Oct 07 13:56:07 crc kubenswrapper[5024]: I1007 13:56:07.409722 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Oct 07 13:56:07 crc kubenswrapper[5024]: I1007 13:56:07.471782 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhswn\" (UniqueName: \"kubernetes.io/projected/95da0c9c-e0dd-44db-bdd4-6d441e64762f-kube-api-access-qhswn\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"95da0c9c-e0dd-44db-bdd4-6d441e64762f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Oct 07 13:56:07 crc kubenswrapper[5024]: I1007 13:56:07.471871 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"95da0c9c-e0dd-44db-bdd4-6d441e64762f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Oct 07 13:56:07 crc kubenswrapper[5024]: I1007 13:56:07.574865 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhswn\" (UniqueName: \"kubernetes.io/projected/95da0c9c-e0dd-44db-bdd4-6d441e64762f-kube-api-access-qhswn\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"95da0c9c-e0dd-44db-bdd4-6d441e64762f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Oct 07 13:56:07 crc kubenswrapper[5024]: I1007 13:56:07.574947 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"95da0c9c-e0dd-44db-bdd4-6d441e64762f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Oct 07 13:56:07 crc kubenswrapper[5024]: I1007 13:56:07.575661 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"95da0c9c-e0dd-44db-bdd4-6d441e64762f\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Oct 07 13:56:08 crc kubenswrapper[5024]: I1007 13:56:08.205037 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhswn\" (UniqueName: \"kubernetes.io/projected/95da0c9c-e0dd-44db-bdd4-6d441e64762f-kube-api-access-qhswn\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"95da0c9c-e0dd-44db-bdd4-6d441e64762f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Oct 07 13:56:08 crc kubenswrapper[5024]: I1007 13:56:08.254216 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"95da0c9c-e0dd-44db-bdd4-6d441e64762f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Oct 07 13:56:08 crc kubenswrapper[5024]: I1007 13:56:08.357571 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Oct 07 13:56:08 crc kubenswrapper[5024]: I1007 13:56:08.752347 5024 scope.go:117] "RemoveContainer" containerID="667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599"
Oct 07 13:56:08 crc kubenswrapper[5024]: E1007 13:56:08.753698 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b"
Oct 07 13:56:08 crc kubenswrapper[5024]: I1007 13:56:08.910478 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Oct 07 13:56:09 crc kubenswrapper[5024]: I1007 13:56:09.163929 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"95da0c9c-e0dd-44db-bdd4-6d441e64762f","Type":"ContainerStarted","Data":"5fcd6668a8a79a5afab2366d352f442e17d5c9dc7e998b45a97a18ebf7b5e137"}
Oct 07 13:56:10 crc kubenswrapper[5024]: I1007 13:56:10.176576 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"95da0c9c-e0dd-44db-bdd4-6d441e64762f","Type":"ContainerStarted","Data":"1791672b92b6cfb568b24cac78333b99eaf96be39105dc59236e29f742c9d593"}
Oct 07 13:56:10 crc kubenswrapper[5024]: I1007 13:56:10.201602 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.37596442 podStartE2EDuration="3.201569642s" podCreationTimestamp="2025-10-07 13:56:07 +0000 UTC" firstStartedPulling="2025-10-07 13:56:08.923237841 +0000 UTC m=+5306.999024689" lastFinishedPulling="2025-10-07 13:56:09.748843033 +0000 UTC m=+5307.824629911" observedRunningTime="2025-10-07 13:56:10.193220191 +0000 UTC m=+5308.269007059" watchObservedRunningTime="2025-10-07 13:56:10.201569642 +0000 UTC m=+5308.277356520"
Oct 07 13:56:21 crc kubenswrapper[5024]: I1007 13:56:21.752348 5024 scope.go:117] "RemoveContainer" containerID="667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599"
Oct 07 13:56:21 crc kubenswrapper[5024]: E1007 13:56:21.753844 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b"
Oct 07 13:56:35 crc kubenswrapper[5024]: I1007 13:56:35.752432 5024 scope.go:117] "RemoveContainer" containerID="667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599"
Oct 07 13:56:35 crc kubenswrapper[5024]: E1007 13:56:35.753740 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b"
Oct 07 13:56:47 crc kubenswrapper[5024]: I1007 13:56:47.752689 5024 scope.go:117] "RemoveContainer" containerID="667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599"
Oct 07 13:56:47 crc kubenswrapper[5024]: E1007 13:56:47.754094 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b"
Oct 07 13:56:52 crc kubenswrapper[5024]: I1007 13:56:52.033695 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jrhdn/must-gather-wc46w"]
Oct 07 13:56:52 crc kubenswrapper[5024]: I1007 13:56:52.036077 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jrhdn/must-gather-wc46w"
Oct 07 13:56:52 crc kubenswrapper[5024]: I1007 13:56:52.041966 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-jrhdn"/"default-dockercfg-rckc2"
Oct 07 13:56:52 crc kubenswrapper[5024]: I1007 13:56:52.042262 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jrhdn"/"openshift-service-ca.crt"
Oct 07 13:56:52 crc kubenswrapper[5024]: I1007 13:56:52.042390 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jrhdn"/"kube-root-ca.crt"
Oct 07 13:56:52 crc kubenswrapper[5024]: I1007 13:56:52.055965 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jrhdn/must-gather-wc46w"]
Oct 07 13:56:52 crc kubenswrapper[5024]: I1007 13:56:52.118689 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc2t6\" (UniqueName: \"kubernetes.io/projected/bd4bc398-eaaa-436a-8ab3-9b4fff8afb84-kube-api-access-cc2t6\") pod \"must-gather-wc46w\" (UID: \"bd4bc398-eaaa-436a-8ab3-9b4fff8afb84\") " pod="openshift-must-gather-jrhdn/must-gather-wc46w"
Oct 07 13:56:52 crc kubenswrapper[5024]: I1007 13:56:52.118752 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd4bc398-eaaa-436a-8ab3-9b4fff8afb84-must-gather-output\") pod \"must-gather-wc46w\" (UID: \"bd4bc398-eaaa-436a-8ab3-9b4fff8afb84\") " pod="openshift-must-gather-jrhdn/must-gather-wc46w"
Oct 07 13:56:52 crc kubenswrapper[5024]: I1007 13:56:52.220719 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc2t6\" (UniqueName: \"kubernetes.io/projected/bd4bc398-eaaa-436a-8ab3-9b4fff8afb84-kube-api-access-cc2t6\") pod \"must-gather-wc46w\" (UID: \"bd4bc398-eaaa-436a-8ab3-9b4fff8afb84\") " pod="openshift-must-gather-jrhdn/must-gather-wc46w"
Oct 07 13:56:52 crc kubenswrapper[5024]: I1007 13:56:52.220803 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd4bc398-eaaa-436a-8ab3-9b4fff8afb84-must-gather-output\") pod \"must-gather-wc46w\" (UID: \"bd4bc398-eaaa-436a-8ab3-9b4fff8afb84\") " pod="openshift-must-gather-jrhdn/must-gather-wc46w"
Oct 07 13:56:52 crc kubenswrapper[5024]: I1007 13:56:52.221343 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd4bc398-eaaa-436a-8ab3-9b4fff8afb84-must-gather-output\") pod \"must-gather-wc46w\" (UID: \"bd4bc398-eaaa-436a-8ab3-9b4fff8afb84\") " pod="openshift-must-gather-jrhdn/must-gather-wc46w"
Oct 07 13:56:53 crc kubenswrapper[5024]: I1007 13:56:53.200936 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc2t6\" (UniqueName: \"kubernetes.io/projected/bd4bc398-eaaa-436a-8ab3-9b4fff8afb84-kube-api-access-cc2t6\") pod \"must-gather-wc46w\" (UID: \"bd4bc398-eaaa-436a-8ab3-9b4fff8afb84\") " pod="openshift-must-gather-jrhdn/must-gather-wc46w"
Oct 07 13:56:53 crc kubenswrapper[5024]: I1007 13:56:53.263472 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jrhdn/must-gather-wc46w"
Oct 07 13:56:53 crc kubenswrapper[5024]: W1007 13:56:53.804855 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd4bc398_eaaa_436a_8ab3_9b4fff8afb84.slice/crio-4e37f6eef31a6b62f65a3951664faf2c31e07b52c670fc58e9bfaa78282cb674 WatchSource:0}: Error finding container 4e37f6eef31a6b62f65a3951664faf2c31e07b52c670fc58e9bfaa78282cb674: Status 404 returned error can't find the container with id 4e37f6eef31a6b62f65a3951664faf2c31e07b52c670fc58e9bfaa78282cb674
Oct 07 13:56:53 crc kubenswrapper[5024]: I1007 13:56:53.808003 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jrhdn/must-gather-wc46w"]
Oct 07 13:56:54 crc kubenswrapper[5024]: I1007 13:56:54.733530 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jrhdn/must-gather-wc46w" event={"ID":"bd4bc398-eaaa-436a-8ab3-9b4fff8afb84","Type":"ContainerStarted","Data":"4e37f6eef31a6b62f65a3951664faf2c31e07b52c670fc58e9bfaa78282cb674"}
Oct 07 13:56:59 crc kubenswrapper[5024]: I1007 13:56:59.818706 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jrhdn/must-gather-wc46w" event={"ID":"bd4bc398-eaaa-436a-8ab3-9b4fff8afb84","Type":"ContainerStarted","Data":"db39f97a1663585755dc99ed6b3ac0fa01eca5ccc5a8f43e1c6e28c4c6bf6ecf"}
Oct 07 13:56:59 crc kubenswrapper[5024]: I1007 13:56:59.819481 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jrhdn/must-gather-wc46w" event={"ID":"bd4bc398-eaaa-436a-8ab3-9b4fff8afb84","Type":"ContainerStarted","Data":"3021d067efb8d28ec027b33a68e895c3f0fd526757074b94d539b63e435226a2"}
Oct 07 13:56:59 crc kubenswrapper[5024]: I1007 13:56:59.844306 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jrhdn/must-gather-wc46w" podStartSLOduration=3.20451256 podStartE2EDuration="7.844282174s" podCreationTimestamp="2025-10-07 13:56:52 +0000 UTC" firstStartedPulling="2025-10-07 13:56:53.807166174 +0000 UTC m=+5351.882953012" lastFinishedPulling="2025-10-07 13:56:58.446935788 +0000 UTC m=+5356.522722626" observedRunningTime="2025-10-07 13:56:59.838814966 +0000 UTC m=+5357.914601874" watchObservedRunningTime="2025-10-07 13:56:59.844282174 +0000 UTC m=+5357.920069012"
Oct 07 13:57:01 crc kubenswrapper[5024]: E1007 13:57:01.529709 5024 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.130:38722->38.102.83.130:39253: read tcp 38.102.83.130:38722->38.102.83.130:39253: read: connection reset by peer
Oct 07 13:57:01 crc kubenswrapper[5024]: I1007 13:57:01.751675 5024 scope.go:117] "RemoveContainer" containerID="667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599"
Oct 07 13:57:01 crc kubenswrapper[5024]: E1007 13:57:01.752073 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b"
Oct 07 13:57:03 crc kubenswrapper[5024]: I1007 13:57:03.010500 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jrhdn/crc-debug-tfn6l"]
Oct 07 13:57:03 crc kubenswrapper[5024]: I1007 13:57:03.014221 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jrhdn/crc-debug-tfn6l" Oct 07 13:57:03 crc kubenswrapper[5024]: I1007 13:57:03.023870 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f5c14804-8a97-47ef-b141-4e55430ce37b-host\") pod \"crc-debug-tfn6l\" (UID: \"f5c14804-8a97-47ef-b141-4e55430ce37b\") " pod="openshift-must-gather-jrhdn/crc-debug-tfn6l" Oct 07 13:57:03 crc kubenswrapper[5024]: I1007 13:57:03.024068 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgs65\" (UniqueName: \"kubernetes.io/projected/f5c14804-8a97-47ef-b141-4e55430ce37b-kube-api-access-lgs65\") pod \"crc-debug-tfn6l\" (UID: \"f5c14804-8a97-47ef-b141-4e55430ce37b\") " pod="openshift-must-gather-jrhdn/crc-debug-tfn6l" Oct 07 13:57:03 crc kubenswrapper[5024]: I1007 13:57:03.126827 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgs65\" (UniqueName: \"kubernetes.io/projected/f5c14804-8a97-47ef-b141-4e55430ce37b-kube-api-access-lgs65\") pod \"crc-debug-tfn6l\" (UID: \"f5c14804-8a97-47ef-b141-4e55430ce37b\") " pod="openshift-must-gather-jrhdn/crc-debug-tfn6l" Oct 07 13:57:03 crc kubenswrapper[5024]: I1007 13:57:03.127412 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f5c14804-8a97-47ef-b141-4e55430ce37b-host\") pod \"crc-debug-tfn6l\" (UID: \"f5c14804-8a97-47ef-b141-4e55430ce37b\") " pod="openshift-must-gather-jrhdn/crc-debug-tfn6l" Oct 07 13:57:03 crc kubenswrapper[5024]: I1007 13:57:03.127639 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f5c14804-8a97-47ef-b141-4e55430ce37b-host\") pod \"crc-debug-tfn6l\" (UID: \"f5c14804-8a97-47ef-b141-4e55430ce37b\") " pod="openshift-must-gather-jrhdn/crc-debug-tfn6l" Oct 07 13:57:03 crc 
kubenswrapper[5024]: I1007 13:57:03.156030 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgs65\" (UniqueName: \"kubernetes.io/projected/f5c14804-8a97-47ef-b141-4e55430ce37b-kube-api-access-lgs65\") pod \"crc-debug-tfn6l\" (UID: \"f5c14804-8a97-47ef-b141-4e55430ce37b\") " pod="openshift-must-gather-jrhdn/crc-debug-tfn6l" Oct 07 13:57:03 crc kubenswrapper[5024]: I1007 13:57:03.331281 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jrhdn/crc-debug-tfn6l" Oct 07 13:57:03 crc kubenswrapper[5024]: W1007 13:57:03.375664 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5c14804_8a97_47ef_b141_4e55430ce37b.slice/crio-c95bd53b57973230cd91189420946212f2e1b29182afd7684f4c8b39071c74f3 WatchSource:0}: Error finding container c95bd53b57973230cd91189420946212f2e1b29182afd7684f4c8b39071c74f3: Status 404 returned error can't find the container with id c95bd53b57973230cd91189420946212f2e1b29182afd7684f4c8b39071c74f3 Oct 07 13:57:03 crc kubenswrapper[5024]: I1007 13:57:03.859489 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jrhdn/crc-debug-tfn6l" event={"ID":"f5c14804-8a97-47ef-b141-4e55430ce37b","Type":"ContainerStarted","Data":"c95bd53b57973230cd91189420946212f2e1b29182afd7684f4c8b39071c74f3"} Oct 07 13:57:14 crc kubenswrapper[5024]: I1007 13:57:14.752302 5024 scope.go:117] "RemoveContainer" containerID="667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599" Oct 07 13:57:14 crc kubenswrapper[5024]: E1007 13:57:14.753308 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:57:14 crc kubenswrapper[5024]: I1007 13:57:14.967109 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jrhdn/crc-debug-tfn6l" event={"ID":"f5c14804-8a97-47ef-b141-4e55430ce37b","Type":"ContainerStarted","Data":"5f57a597e3474973ec28fbbb1d09a978f7fa851587302befe36192657fdfee56"} Oct 07 13:57:14 crc kubenswrapper[5024]: I1007 13:57:14.985757 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jrhdn/crc-debug-tfn6l" podStartSLOduration=2.030935958 podStartE2EDuration="12.985724175s" podCreationTimestamp="2025-10-07 13:57:02 +0000 UTC" firstStartedPulling="2025-10-07 13:57:03.378153904 +0000 UTC m=+5361.453940742" lastFinishedPulling="2025-10-07 13:57:14.332942101 +0000 UTC m=+5372.408728959" observedRunningTime="2025-10-07 13:57:14.980436682 +0000 UTC m=+5373.056223520" watchObservedRunningTime="2025-10-07 13:57:14.985724175 +0000 UTC m=+5373.061511013" Oct 07 13:57:27 crc kubenswrapper[5024]: I1007 13:57:27.751758 5024 scope.go:117] "RemoveContainer" containerID="667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599" Oct 07 13:57:27 crc kubenswrapper[5024]: E1007 13:57:27.752718 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:57:41 crc kubenswrapper[5024]: I1007 13:57:41.751116 5024 scope.go:117] "RemoveContainer" containerID="667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599" Oct 07 13:57:41 crc kubenswrapper[5024]: E1007 13:57:41.752056 5024 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:57:56 crc kubenswrapper[5024]: I1007 13:57:56.752259 5024 scope.go:117] "RemoveContainer" containerID="667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599" Oct 07 13:57:56 crc kubenswrapper[5024]: E1007 13:57:56.752945 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:58:07 crc kubenswrapper[5024]: I1007 13:58:07.754006 5024 scope.go:117] "RemoveContainer" containerID="667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599" Oct 07 13:58:07 crc kubenswrapper[5024]: E1007 13:58:07.755065 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:58:16 crc kubenswrapper[5024]: I1007 13:58:16.028305 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7b5d4dfdcd-m7phq_e7d0c7cb-ebe4-4183-a7e5-56627d4a5073/barbican-api/0.log" Oct 07 
13:58:16 crc kubenswrapper[5024]: I1007 13:58:16.793028 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7b5d4dfdcd-m7phq_e7d0c7cb-ebe4-4183-a7e5-56627d4a5073/barbican-api-log/0.log" Oct 07 13:58:16 crc kubenswrapper[5024]: I1007 13:58:16.968386 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7577458c58-jdh8c_fd4f1323-3ac0-4f6b-99fe-58ef913e21bd/barbican-keystone-listener/0.log" Oct 07 13:58:17 crc kubenswrapper[5024]: I1007 13:58:17.177668 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7577458c58-jdh8c_fd4f1323-3ac0-4f6b-99fe-58ef913e21bd/barbican-keystone-listener-log/0.log" Oct 07 13:58:17 crc kubenswrapper[5024]: I1007 13:58:17.384045 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7d95969967-77m89_84128da8-a6ce-4984-8cdd-e9b6202196c8/barbican-worker/0.log" Oct 07 13:58:17 crc kubenswrapper[5024]: I1007 13:58:17.489007 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7d95969967-77m89_84128da8-a6ce-4984-8cdd-e9b6202196c8/barbican-worker-log/0.log" Oct 07 13:58:17 crc kubenswrapper[5024]: I1007 13:58:17.743274 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-7fmjt_410ef07b-0d0c-49c7-9e8d-a144b83b7bca/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Oct 07 13:58:17 crc kubenswrapper[5024]: I1007 13:58:17.967428 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6891ff95-7d91-421e-a087-7cbf5fd9a6e9/ceilometer-central-agent/0.log" Oct 07 13:58:18 crc kubenswrapper[5024]: I1007 13:58:18.103345 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6891ff95-7d91-421e-a087-7cbf5fd9a6e9/ceilometer-notification-agent/0.log" Oct 07 13:58:18 crc kubenswrapper[5024]: I1007 13:58:18.148613 5024 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6891ff95-7d91-421e-a087-7cbf5fd9a6e9/proxy-httpd/0.log" Oct 07 13:58:18 crc kubenswrapper[5024]: I1007 13:58:18.263158 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6891ff95-7d91-421e-a087-7cbf5fd9a6e9/sg-core/0.log" Oct 07 13:58:18 crc kubenswrapper[5024]: I1007 13:58:18.342554 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-l9m4c_ca2ffd93-5f0b-4bab-806a-a0b1a2bb0033/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Oct 07 13:58:18 crc kubenswrapper[5024]: I1007 13:58:18.587716 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hwvbk_4270239b-72c9-4e60-938e-77db772605ed/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Oct 07 13:58:19 crc kubenswrapper[5024]: I1007 13:58:19.356906 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_55dc71d9-1e63-48ce-a93b-c540ec901312/cinder-api/0.log" Oct 07 13:58:19 crc kubenswrapper[5024]: I1007 13:58:19.436555 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_55dc71d9-1e63-48ce-a93b-c540ec901312/cinder-api-log/0.log" Oct 07 13:58:19 crc kubenswrapper[5024]: I1007 13:58:19.580382 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_893e2de7-0613-4750-a8a3-6630394129aa/probe/0.log" Oct 07 13:58:19 crc kubenswrapper[5024]: I1007 13:58:19.845197 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8b2c18a7-32a0-4768-b60c-0d01da478a8a/cinder-scheduler/0.log" Oct 07 13:58:19 crc kubenswrapper[5024]: I1007 13:58:19.990286 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8b2c18a7-32a0-4768-b60c-0d01da478a8a/probe/0.log" Oct 07 13:58:20 crc kubenswrapper[5024]: I1007 13:58:20.311668 
5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_d92bf1fe-58e8-4f42-bd82-bcde5acdf07e/probe/0.log" Oct 07 13:58:20 crc kubenswrapper[5024]: I1007 13:58:20.752410 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-7bdp6_be08cfea-37bb-4ebf-b56a-678a1e73ee4e/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 07 13:58:21 crc kubenswrapper[5024]: I1007 13:58:21.243831 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-pbdc5_b17f8fee-5248-4b95-b6a3-35ea547dbb4c/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 07 13:58:21 crc kubenswrapper[5024]: I1007 13:58:21.751321 5024 scope.go:117] "RemoveContainer" containerID="667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599" Oct 07 13:58:21 crc kubenswrapper[5024]: E1007 13:58:21.751770 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:58:21 crc kubenswrapper[5024]: I1007 13:58:21.753275 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-d2sxl_82b8c291-f0c7-4944-8d91-46d7e4b16fb0/init/0.log" Oct 07 13:58:21 crc kubenswrapper[5024]: I1007 13:58:21.991810 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-d2sxl_82b8c291-f0c7-4944-8d91-46d7e4b16fb0/init/0.log" Oct 07 13:58:22 crc kubenswrapper[5024]: I1007 13:58:22.408916 5024 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-d2sxl_82b8c291-f0c7-4944-8d91-46d7e4b16fb0/dnsmasq-dns/0.log" Oct 07 13:58:22 crc kubenswrapper[5024]: I1007 13:58:22.920619 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c1a01461-d836-42d8-99c4-6d21acf59856/glance-httpd/0.log" Oct 07 13:58:23 crc kubenswrapper[5024]: I1007 13:58:23.132792 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c1a01461-d836-42d8-99c4-6d21acf59856/glance-log/0.log" Oct 07 13:58:23 crc kubenswrapper[5024]: I1007 13:58:23.622061 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_893e2de7-0613-4750-a8a3-6630394129aa/cinder-backup/0.log" Oct 07 13:58:23 crc kubenswrapper[5024]: I1007 13:58:23.633883 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4962d6d9-454e-474e-a4ca-f998de4bf476/glance-httpd/0.log" Oct 07 13:58:23 crc kubenswrapper[5024]: I1007 13:58:23.809869 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4962d6d9-454e-474e-a4ca-f998de4bf476/glance-log/0.log" Oct 07 13:58:24 crc kubenswrapper[5024]: I1007 13:58:24.173964 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6748775596-w8q6s_19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c/horizon/0.log" Oct 07 13:58:24 crc kubenswrapper[5024]: I1007 13:58:24.577679 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6748775596-w8q6s_19e34a0d-00bb-4cf0-bfc3-7d38760b0f2c/horizon-log/0.log" Oct 07 13:58:25 crc kubenswrapper[5024]: I1007 13:58:25.298795 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-cvzhd_989605a8-0e53-4a59-9c1f-d927baa38ca6/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Oct 07 13:58:25 crc kubenswrapper[5024]: I1007 13:58:25.539943 
5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-g49rf_82eb373d-d00e-4509-bbaf-728b2cb90b80/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 07 13:58:25 crc kubenswrapper[5024]: I1007 13:58:25.761517 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29330701-bsr5z_872b9e07-c892-450b-bc31-786916818a09/keystone-cron/0.log" Oct 07 13:58:26 crc kubenswrapper[5024]: I1007 13:58:26.499645 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_91789504-a808-4dab-99d4-a3ad6eb1751f/kube-state-metrics/0.log" Oct 07 13:58:26 crc kubenswrapper[5024]: I1007 13:58:26.989771 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-d8dff8cf4-cpjpz_ea15930e-0c3f-421a-a9b8-a82399fa4e93/keystone-api/0.log" Oct 07 13:58:27 crc kubenswrapper[5024]: I1007 13:58:27.445261 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-fwv5f_a204d343-554a-475d-8b5b-dc0a7d5a09c9/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Oct 07 13:58:27 crc kubenswrapper[5024]: I1007 13:58:27.824325 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_ccccf784-0422-43bf-9926-c887daea816f/manila-api/0.log" Oct 07 13:58:27 crc kubenswrapper[5024]: I1007 13:58:27.859277 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_ccccf784-0422-43bf-9926-c887daea816f/manila-api-log/0.log" Oct 07 13:58:28 crc kubenswrapper[5024]: I1007 13:58:28.219546 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_754c6c13-696f-4a69-ad50-ba23eb523d41/manila-scheduler/0.log" Oct 07 13:58:28 crc kubenswrapper[5024]: I1007 13:58:28.317938 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_754c6c13-696f-4a69-ad50-ba23eb523d41/probe/0.log" Oct 
07 13:58:28 crc kubenswrapper[5024]: I1007 13:58:28.530484 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_d92bf1fe-58e8-4f42-bd82-bcde5acdf07e/cinder-volume/0.log" Oct 07 13:58:28 crc kubenswrapper[5024]: I1007 13:58:28.582916 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_1b21d961-b63f-40b0-b1f6-54562b4edcdb/manila-share/0.log" Oct 07 13:58:28 crc kubenswrapper[5024]: I1007 13:58:28.660730 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_1b21d961-b63f-40b0-b1f6-54562b4edcdb/probe/0.log" Oct 07 13:58:29 crc kubenswrapper[5024]: I1007 13:58:29.305071 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6cf6bc68f7-lxxqd_fdc1f1c0-bece-42f0-b499-bdf645a1a4c9/neutron-httpd/0.log" Oct 07 13:58:29 crc kubenswrapper[5024]: I1007 13:58:29.308707 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6cf6bc68f7-lxxqd_fdc1f1c0-bece-42f0-b499-bdf645a1a4c9/neutron-api/0.log" Oct 07 13:58:29 crc kubenswrapper[5024]: I1007 13:58:29.474179 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-qqgj4_d967b331-7a34-4912-b2f8-93187b6d1c2e/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Oct 07 13:58:30 crc kubenswrapper[5024]: I1007 13:58:30.238222 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_ffd37ac7-f3e0-4bb1-8756-7c2700af5cad/nova-cell0-conductor-conductor/0.log" Oct 07 13:58:30 crc kubenswrapper[5024]: I1007 13:58:30.391129 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f63c03fb-b295-4ba6-b385-842aebd5147f/nova-api-log/0.log" Oct 07 13:58:30 crc kubenswrapper[5024]: I1007 13:58:30.588368 5024 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_f63c03fb-b295-4ba6-b385-842aebd5147f/nova-api-api/0.log" Oct 07 13:58:30 crc kubenswrapper[5024]: I1007 13:58:30.699025 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_c64c06f8-a7f8-45a5-ae11-14f952a5304c/nova-cell1-conductor-conductor/0.log" Oct 07 13:58:30 crc kubenswrapper[5024]: I1007 13:58:30.901780 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_04e53d1a-544a-462f-b10a-792855684c25/nova-cell1-novncproxy-novncproxy/0.log" Oct 07 13:58:30 crc kubenswrapper[5024]: I1007 13:58:30.995722 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lnszw_81a3838c-53ed-4367-8b5d-35295d94823c/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Oct 07 13:58:31 crc kubenswrapper[5024]: I1007 13:58:31.259103 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_8f153807-258c-4fd9-a4d7-d16ab555b74f/nova-metadata-log/0.log" Oct 07 13:58:31 crc kubenswrapper[5024]: I1007 13:58:31.756821 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_d6967c30-2237-4e21-93c5-ae456e0383d6/nova-scheduler-scheduler/0.log" Oct 07 13:58:32 crc kubenswrapper[5024]: I1007 13:58:32.001027 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5b493394-e353-45b2-b7a9-71b94654e2e7/mysql-bootstrap/0.log" Oct 07 13:58:32 crc kubenswrapper[5024]: I1007 13:58:32.123919 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5b493394-e353-45b2-b7a9-71b94654e2e7/mysql-bootstrap/0.log" Oct 07 13:58:32 crc kubenswrapper[5024]: I1007 13:58:32.205680 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5b493394-e353-45b2-b7a9-71b94654e2e7/galera/0.log" Oct 07 13:58:32 crc kubenswrapper[5024]: 
I1007 13:58:32.437125 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_7e4863f6-5bdf-407e-ab2c-a161223537cc/mysql-bootstrap/0.log" Oct 07 13:58:32 crc kubenswrapper[5024]: I1007 13:58:32.702463 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_7e4863f6-5bdf-407e-ab2c-a161223537cc/mysql-bootstrap/0.log" Oct 07 13:58:32 crc kubenswrapper[5024]: I1007 13:58:32.738971 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_7e4863f6-5bdf-407e-ab2c-a161223537cc/galera/0.log" Oct 07 13:58:32 crc kubenswrapper[5024]: I1007 13:58:32.942103 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_7564967b-adb6-452b-b141-c91016f1c9d8/openstackclient/0.log" Oct 07 13:58:33 crc kubenswrapper[5024]: I1007 13:58:33.139582 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-bs9lb_91fa5e61-2577-4fad-9b32-395eb0e5105b/ovn-controller/0.log" Oct 07 13:58:33 crc kubenswrapper[5024]: I1007 13:58:33.354206 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-n2jhs_903dfb65-4299-4443-a62d-216c7e5a2953/openstack-network-exporter/0.log" Oct 07 13:58:33 crc kubenswrapper[5024]: I1007 13:58:33.564915 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-q2ktf_befa2358-2851-4973-b1e2-6f003e9f1089/ovsdb-server-init/0.log" Oct 07 13:58:33 crc kubenswrapper[5024]: I1007 13:58:33.622989 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_8f153807-258c-4fd9-a4d7-d16ab555b74f/nova-metadata-metadata/0.log" Oct 07 13:58:33 crc kubenswrapper[5024]: I1007 13:58:33.777162 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-q2ktf_befa2358-2851-4973-b1e2-6f003e9f1089/ovsdb-server-init/0.log" Oct 07 13:58:33 crc kubenswrapper[5024]: I1007 13:58:33.866016 
5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-q2ktf_befa2358-2851-4973-b1e2-6f003e9f1089/ovs-vswitchd/0.log" Oct 07 13:58:33 crc kubenswrapper[5024]: I1007 13:58:33.894844 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-q2ktf_befa2358-2851-4973-b1e2-6f003e9f1089/ovsdb-server/0.log" Oct 07 13:58:34 crc kubenswrapper[5024]: I1007 13:58:34.115652 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-jqrtx_a9b13355-d69c-4b53-972f-5d7014d5a81c/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Oct 07 13:58:34 crc kubenswrapper[5024]: I1007 13:58:34.477381 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_edcae764-2b6e-4119-b44c-64ddab7e5309/openstack-network-exporter/0.log" Oct 07 13:58:34 crc kubenswrapper[5024]: I1007 13:58:34.484367 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_edcae764-2b6e-4119-b44c-64ddab7e5309/ovn-northd/0.log" Oct 07 13:58:34 crc kubenswrapper[5024]: I1007 13:58:34.675781 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a72fb2e5-9365-4aea-854d-06997dde109c/openstack-network-exporter/0.log" Oct 07 13:58:34 crc kubenswrapper[5024]: I1007 13:58:34.681867 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a72fb2e5-9365-4aea-854d-06997dde109c/ovsdbserver-nb/0.log" Oct 07 13:58:34 crc kubenswrapper[5024]: I1007 13:58:34.903023 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_194f8a33-76cf-48a3-a4fc-0ff4eb701bb5/openstack-network-exporter/0.log" Oct 07 13:58:34 crc kubenswrapper[5024]: I1007 13:58:34.965948 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_194f8a33-76cf-48a3-a4fc-0ff4eb701bb5/ovsdbserver-sb/0.log" Oct 07 13:58:35 crc kubenswrapper[5024]: I1007 
13:58:35.317582 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6dccff77b6-pr5gt_a10c29d5-3a80-417b-90ec-9c2da32f3de0/placement-api/0.log" Oct 07 13:58:35 crc kubenswrapper[5024]: I1007 13:58:35.342321 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6dccff77b6-pr5gt_a10c29d5-3a80-417b-90ec-9c2da32f3de0/placement-log/0.log" Oct 07 13:58:35 crc kubenswrapper[5024]: I1007 13:58:35.519156 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a6533041-d509-4740-9ed5-06cdf97e7340/setup-container/0.log" Oct 07 13:58:35 crc kubenswrapper[5024]: I1007 13:58:35.810657 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a6533041-d509-4740-9ed5-06cdf97e7340/setup-container/0.log" Oct 07 13:58:35 crc kubenswrapper[5024]: I1007 13:58:35.842374 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a6533041-d509-4740-9ed5-06cdf97e7340/rabbitmq/0.log" Oct 07 13:58:36 crc kubenswrapper[5024]: I1007 13:58:36.007719 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_10cd989b-34e3-4e21-bb69-40115806b190/setup-container/0.log" Oct 07 13:58:36 crc kubenswrapper[5024]: I1007 13:58:36.292795 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_10cd989b-34e3-4e21-bb69-40115806b190/setup-container/0.log" Oct 07 13:58:36 crc kubenswrapper[5024]: I1007 13:58:36.336170 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_10cd989b-34e3-4e21-bb69-40115806b190/rabbitmq/0.log" Oct 07 13:58:36 crc kubenswrapper[5024]: I1007 13:58:36.567051 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-6xmnp_a4d034c6-4fa0-487b-9c7c-bf13fd98a0a9/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 07 13:58:36 crc 
kubenswrapper[5024]: I1007 13:58:36.604059 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-vbk26_8db00627-6d2c-4acb-915e-413ed2590639/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Oct 07 13:58:36 crc kubenswrapper[5024]: I1007 13:58:36.751856 5024 scope.go:117] "RemoveContainer" containerID="667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599" Oct 07 13:58:36 crc kubenswrapper[5024]: E1007 13:58:36.752439 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:58:36 crc kubenswrapper[5024]: I1007 13:58:36.779172 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-zkt2n_4c27b4b1-36a5-4e33-9098-35a524752868/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 07 13:58:37 crc kubenswrapper[5024]: I1007 13:58:37.026075 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-9ljl7_e49b99f2-b6f5-4046-8e7e-c933dd49411b/ssh-known-hosts-edpm-deployment/0.log" Oct 07 13:58:37 crc kubenswrapper[5024]: I1007 13:58:37.193525 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_c269d1e5-beee-4868-949d-eb84e0d44521/tempest-tests-tempest-tests-runner/0.log" Oct 07 13:58:37 crc kubenswrapper[5024]: I1007 13:58:37.355761 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_95da0c9c-e0dd-44db-bdd4-6d441e64762f/test-operator-logs-container/0.log" Oct 07 13:58:37 crc kubenswrapper[5024]: 
I1007 13:58:37.507580 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-zdkqj_710c0046-3758-4f26-9513-b1064d858b9e/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 07 13:58:46 crc kubenswrapper[5024]: I1007 13:58:46.916666 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t2rwb"] Oct 07 13:58:46 crc kubenswrapper[5024]: I1007 13:58:46.919416 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t2rwb" Oct 07 13:58:46 crc kubenswrapper[5024]: I1007 13:58:46.927489 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t2rwb"] Oct 07 13:58:46 crc kubenswrapper[5024]: I1007 13:58:46.958764 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb276a2-af1c-4fa0-9470-604c33a82e2f-utilities\") pod \"redhat-marketplace-t2rwb\" (UID: \"8fb276a2-af1c-4fa0-9470-604c33a82e2f\") " pod="openshift-marketplace/redhat-marketplace-t2rwb" Oct 07 13:58:46 crc kubenswrapper[5024]: I1007 13:58:46.960589 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb276a2-af1c-4fa0-9470-604c33a82e2f-catalog-content\") pod \"redhat-marketplace-t2rwb\" (UID: \"8fb276a2-af1c-4fa0-9470-604c33a82e2f\") " pod="openshift-marketplace/redhat-marketplace-t2rwb" Oct 07 13:58:46 crc kubenswrapper[5024]: I1007 13:58:46.960690 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb7hn\" (UniqueName: \"kubernetes.io/projected/8fb276a2-af1c-4fa0-9470-604c33a82e2f-kube-api-access-qb7hn\") pod \"redhat-marketplace-t2rwb\" (UID: \"8fb276a2-af1c-4fa0-9470-604c33a82e2f\") " 
pod="openshift-marketplace/redhat-marketplace-t2rwb" Oct 07 13:58:47 crc kubenswrapper[5024]: I1007 13:58:47.062708 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb7hn\" (UniqueName: \"kubernetes.io/projected/8fb276a2-af1c-4fa0-9470-604c33a82e2f-kube-api-access-qb7hn\") pod \"redhat-marketplace-t2rwb\" (UID: \"8fb276a2-af1c-4fa0-9470-604c33a82e2f\") " pod="openshift-marketplace/redhat-marketplace-t2rwb" Oct 07 13:58:47 crc kubenswrapper[5024]: I1007 13:58:47.062790 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb276a2-af1c-4fa0-9470-604c33a82e2f-utilities\") pod \"redhat-marketplace-t2rwb\" (UID: \"8fb276a2-af1c-4fa0-9470-604c33a82e2f\") " pod="openshift-marketplace/redhat-marketplace-t2rwb" Oct 07 13:58:47 crc kubenswrapper[5024]: I1007 13:58:47.062949 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb276a2-af1c-4fa0-9470-604c33a82e2f-catalog-content\") pod \"redhat-marketplace-t2rwb\" (UID: \"8fb276a2-af1c-4fa0-9470-604c33a82e2f\") " pod="openshift-marketplace/redhat-marketplace-t2rwb" Oct 07 13:58:47 crc kubenswrapper[5024]: I1007 13:58:47.063373 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb276a2-af1c-4fa0-9470-604c33a82e2f-catalog-content\") pod \"redhat-marketplace-t2rwb\" (UID: \"8fb276a2-af1c-4fa0-9470-604c33a82e2f\") " pod="openshift-marketplace/redhat-marketplace-t2rwb" Oct 07 13:58:47 crc kubenswrapper[5024]: I1007 13:58:47.064308 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb276a2-af1c-4fa0-9470-604c33a82e2f-utilities\") pod \"redhat-marketplace-t2rwb\" (UID: \"8fb276a2-af1c-4fa0-9470-604c33a82e2f\") " pod="openshift-marketplace/redhat-marketplace-t2rwb" 
Oct 07 13:58:47 crc kubenswrapper[5024]: I1007 13:58:47.088988 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb7hn\" (UniqueName: \"kubernetes.io/projected/8fb276a2-af1c-4fa0-9470-604c33a82e2f-kube-api-access-qb7hn\") pod \"redhat-marketplace-t2rwb\" (UID: \"8fb276a2-af1c-4fa0-9470-604c33a82e2f\") " pod="openshift-marketplace/redhat-marketplace-t2rwb" Oct 07 13:58:47 crc kubenswrapper[5024]: I1007 13:58:47.245693 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t2rwb" Oct 07 13:58:47 crc kubenswrapper[5024]: I1007 13:58:47.762115 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t2rwb"] Oct 07 13:58:47 crc kubenswrapper[5024]: I1007 13:58:47.952659 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t2rwb" event={"ID":"8fb276a2-af1c-4fa0-9470-604c33a82e2f","Type":"ContainerStarted","Data":"41d0b1cb8d081c0f5283d8d0df1a59a715f53a4dd1ebdce3762266ebe72a7bc3"} Oct 07 13:58:48 crc kubenswrapper[5024]: I1007 13:58:48.752038 5024 scope.go:117] "RemoveContainer" containerID="667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599" Oct 07 13:58:48 crc kubenswrapper[5024]: E1007 13:58:48.752416 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:58:48 crc kubenswrapper[5024]: I1007 13:58:48.963097 5024 generic.go:334] "Generic (PLEG): container finished" podID="8fb276a2-af1c-4fa0-9470-604c33a82e2f" 
containerID="1b355deaff042f93bc48bb314663ce68da1197d4a9c937a8b80688ef9d63b56b" exitCode=0 Oct 07 13:58:48 crc kubenswrapper[5024]: I1007 13:58:48.963174 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t2rwb" event={"ID":"8fb276a2-af1c-4fa0-9470-604c33a82e2f","Type":"ContainerDied","Data":"1b355deaff042f93bc48bb314663ce68da1197d4a9c937a8b80688ef9d63b56b"} Oct 07 13:58:49 crc kubenswrapper[5024]: I1007 13:58:49.977441 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t2rwb" event={"ID":"8fb276a2-af1c-4fa0-9470-604c33a82e2f","Type":"ContainerStarted","Data":"f71dc88e654ad7bbfc56df73530a4c94c0f85def643ff6d263b180af069adccc"} Oct 07 13:58:50 crc kubenswrapper[5024]: I1007 13:58:50.989400 5024 generic.go:334] "Generic (PLEG): container finished" podID="8fb276a2-af1c-4fa0-9470-604c33a82e2f" containerID="f71dc88e654ad7bbfc56df73530a4c94c0f85def643ff6d263b180af069adccc" exitCode=0 Oct 07 13:58:50 crc kubenswrapper[5024]: I1007 13:58:50.989457 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t2rwb" event={"ID":"8fb276a2-af1c-4fa0-9470-604c33a82e2f","Type":"ContainerDied","Data":"f71dc88e654ad7bbfc56df73530a4c94c0f85def643ff6d263b180af069adccc"} Oct 07 13:58:52 crc kubenswrapper[5024]: I1007 13:58:52.011310 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t2rwb" event={"ID":"8fb276a2-af1c-4fa0-9470-604c33a82e2f","Type":"ContainerStarted","Data":"9bd9938659604d14b0e417572e25fd1db6f188a66ae3e792370f6a620c5d767f"} Oct 07 13:58:52 crc kubenswrapper[5024]: I1007 13:58:52.049954 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t2rwb" podStartSLOduration=3.387178306 podStartE2EDuration="6.049910949s" podCreationTimestamp="2025-10-07 13:58:46 +0000 UTC" firstStartedPulling="2025-10-07 13:58:48.964982618 +0000 
UTC m=+5467.040769456" lastFinishedPulling="2025-10-07 13:58:51.627715261 +0000 UTC m=+5469.703502099" observedRunningTime="2025-10-07 13:58:52.03507835 +0000 UTC m=+5470.110865188" watchObservedRunningTime="2025-10-07 13:58:52.049910949 +0000 UTC m=+5470.125697787" Oct 07 13:58:55 crc kubenswrapper[5024]: I1007 13:58:55.050945 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_acb1b089-7f99-44d4-9e4d-4cb652ee6e21/memcached/0.log" Oct 07 13:58:57 crc kubenswrapper[5024]: I1007 13:58:57.246418 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t2rwb" Oct 07 13:58:57 crc kubenswrapper[5024]: I1007 13:58:57.247124 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t2rwb" Oct 07 13:58:57 crc kubenswrapper[5024]: I1007 13:58:57.309770 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t2rwb" Oct 07 13:58:58 crc kubenswrapper[5024]: I1007 13:58:58.139202 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t2rwb" Oct 07 13:58:58 crc kubenswrapper[5024]: I1007 13:58:58.190335 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t2rwb"] Oct 07 13:58:59 crc kubenswrapper[5024]: I1007 13:58:59.752235 5024 scope.go:117] "RemoveContainer" containerID="667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599" Oct 07 13:58:59 crc kubenswrapper[5024]: E1007 13:58:59.752877 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:59:00 crc kubenswrapper[5024]: I1007 13:59:00.081185 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t2rwb" podUID="8fb276a2-af1c-4fa0-9470-604c33a82e2f" containerName="registry-server" containerID="cri-o://9bd9938659604d14b0e417572e25fd1db6f188a66ae3e792370f6a620c5d767f" gracePeriod=2 Oct 07 13:59:00 crc kubenswrapper[5024]: I1007 13:59:00.581386 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t2rwb" Oct 07 13:59:00 crc kubenswrapper[5024]: I1007 13:59:00.676337 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb276a2-af1c-4fa0-9470-604c33a82e2f-utilities\") pod \"8fb276a2-af1c-4fa0-9470-604c33a82e2f\" (UID: \"8fb276a2-af1c-4fa0-9470-604c33a82e2f\") " Oct 07 13:59:00 crc kubenswrapper[5024]: I1007 13:59:00.676494 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb7hn\" (UniqueName: \"kubernetes.io/projected/8fb276a2-af1c-4fa0-9470-604c33a82e2f-kube-api-access-qb7hn\") pod \"8fb276a2-af1c-4fa0-9470-604c33a82e2f\" (UID: \"8fb276a2-af1c-4fa0-9470-604c33a82e2f\") " Oct 07 13:59:00 crc kubenswrapper[5024]: I1007 13:59:00.676728 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb276a2-af1c-4fa0-9470-604c33a82e2f-catalog-content\") pod \"8fb276a2-af1c-4fa0-9470-604c33a82e2f\" (UID: \"8fb276a2-af1c-4fa0-9470-604c33a82e2f\") " Oct 07 13:59:00 crc kubenswrapper[5024]: I1007 13:59:00.677674 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fb276a2-af1c-4fa0-9470-604c33a82e2f-utilities" (OuterVolumeSpecName: "utilities") 
pod "8fb276a2-af1c-4fa0-9470-604c33a82e2f" (UID: "8fb276a2-af1c-4fa0-9470-604c33a82e2f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:59:00 crc kubenswrapper[5024]: I1007 13:59:00.691919 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fb276a2-af1c-4fa0-9470-604c33a82e2f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8fb276a2-af1c-4fa0-9470-604c33a82e2f" (UID: "8fb276a2-af1c-4fa0-9470-604c33a82e2f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:59:00 crc kubenswrapper[5024]: I1007 13:59:00.779963 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb276a2-af1c-4fa0-9470-604c33a82e2f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 13:59:00 crc kubenswrapper[5024]: I1007 13:59:00.779999 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb276a2-af1c-4fa0-9470-604c33a82e2f-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 13:59:01 crc kubenswrapper[5024]: I1007 13:59:01.104405 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fb276a2-af1c-4fa0-9470-604c33a82e2f-kube-api-access-qb7hn" (OuterVolumeSpecName: "kube-api-access-qb7hn") pod "8fb276a2-af1c-4fa0-9470-604c33a82e2f" (UID: "8fb276a2-af1c-4fa0-9470-604c33a82e2f"). InnerVolumeSpecName "kube-api-access-qb7hn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:59:01 crc kubenswrapper[5024]: I1007 13:59:01.106778 5024 generic.go:334] "Generic (PLEG): container finished" podID="8fb276a2-af1c-4fa0-9470-604c33a82e2f" containerID="9bd9938659604d14b0e417572e25fd1db6f188a66ae3e792370f6a620c5d767f" exitCode=0 Oct 07 13:59:01 crc kubenswrapper[5024]: I1007 13:59:01.106823 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t2rwb" event={"ID":"8fb276a2-af1c-4fa0-9470-604c33a82e2f","Type":"ContainerDied","Data":"9bd9938659604d14b0e417572e25fd1db6f188a66ae3e792370f6a620c5d767f"} Oct 07 13:59:01 crc kubenswrapper[5024]: I1007 13:59:01.106855 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t2rwb" event={"ID":"8fb276a2-af1c-4fa0-9470-604c33a82e2f","Type":"ContainerDied","Data":"41d0b1cb8d081c0f5283d8d0df1a59a715f53a4dd1ebdce3762266ebe72a7bc3"} Oct 07 13:59:01 crc kubenswrapper[5024]: I1007 13:59:01.106875 5024 scope.go:117] "RemoveContainer" containerID="9bd9938659604d14b0e417572e25fd1db6f188a66ae3e792370f6a620c5d767f" Oct 07 13:59:01 crc kubenswrapper[5024]: I1007 13:59:01.107039 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t2rwb" Oct 07 13:59:01 crc kubenswrapper[5024]: I1007 13:59:01.177381 5024 scope.go:117] "RemoveContainer" containerID="f71dc88e654ad7bbfc56df73530a4c94c0f85def643ff6d263b180af069adccc" Oct 07 13:59:01 crc kubenswrapper[5024]: I1007 13:59:01.184835 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t2rwb"] Oct 07 13:59:01 crc kubenswrapper[5024]: I1007 13:59:01.193327 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb7hn\" (UniqueName: \"kubernetes.io/projected/8fb276a2-af1c-4fa0-9470-604c33a82e2f-kube-api-access-qb7hn\") on node \"crc\" DevicePath \"\"" Oct 07 13:59:01 crc kubenswrapper[5024]: I1007 13:59:01.204309 5024 scope.go:117] "RemoveContainer" containerID="1b355deaff042f93bc48bb314663ce68da1197d4a9c937a8b80688ef9d63b56b" Oct 07 13:59:01 crc kubenswrapper[5024]: I1007 13:59:01.204736 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t2rwb"] Oct 07 13:59:01 crc kubenswrapper[5024]: I1007 13:59:01.250587 5024 scope.go:117] "RemoveContainer" containerID="9bd9938659604d14b0e417572e25fd1db6f188a66ae3e792370f6a620c5d767f" Oct 07 13:59:01 crc kubenswrapper[5024]: E1007 13:59:01.251350 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bd9938659604d14b0e417572e25fd1db6f188a66ae3e792370f6a620c5d767f\": container with ID starting with 9bd9938659604d14b0e417572e25fd1db6f188a66ae3e792370f6a620c5d767f not found: ID does not exist" containerID="9bd9938659604d14b0e417572e25fd1db6f188a66ae3e792370f6a620c5d767f" Oct 07 13:59:01 crc kubenswrapper[5024]: I1007 13:59:01.251411 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bd9938659604d14b0e417572e25fd1db6f188a66ae3e792370f6a620c5d767f"} err="failed to get container status 
\"9bd9938659604d14b0e417572e25fd1db6f188a66ae3e792370f6a620c5d767f\": rpc error: code = NotFound desc = could not find container \"9bd9938659604d14b0e417572e25fd1db6f188a66ae3e792370f6a620c5d767f\": container with ID starting with 9bd9938659604d14b0e417572e25fd1db6f188a66ae3e792370f6a620c5d767f not found: ID does not exist" Oct 07 13:59:01 crc kubenswrapper[5024]: I1007 13:59:01.251445 5024 scope.go:117] "RemoveContainer" containerID="f71dc88e654ad7bbfc56df73530a4c94c0f85def643ff6d263b180af069adccc" Oct 07 13:59:01 crc kubenswrapper[5024]: E1007 13:59:01.252371 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f71dc88e654ad7bbfc56df73530a4c94c0f85def643ff6d263b180af069adccc\": container with ID starting with f71dc88e654ad7bbfc56df73530a4c94c0f85def643ff6d263b180af069adccc not found: ID does not exist" containerID="f71dc88e654ad7bbfc56df73530a4c94c0f85def643ff6d263b180af069adccc" Oct 07 13:59:01 crc kubenswrapper[5024]: I1007 13:59:01.252477 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f71dc88e654ad7bbfc56df73530a4c94c0f85def643ff6d263b180af069adccc"} err="failed to get container status \"f71dc88e654ad7bbfc56df73530a4c94c0f85def643ff6d263b180af069adccc\": rpc error: code = NotFound desc = could not find container \"f71dc88e654ad7bbfc56df73530a4c94c0f85def643ff6d263b180af069adccc\": container with ID starting with f71dc88e654ad7bbfc56df73530a4c94c0f85def643ff6d263b180af069adccc not found: ID does not exist" Oct 07 13:59:01 crc kubenswrapper[5024]: I1007 13:59:01.252583 5024 scope.go:117] "RemoveContainer" containerID="1b355deaff042f93bc48bb314663ce68da1197d4a9c937a8b80688ef9d63b56b" Oct 07 13:59:01 crc kubenswrapper[5024]: E1007 13:59:01.253105 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1b355deaff042f93bc48bb314663ce68da1197d4a9c937a8b80688ef9d63b56b\": container with ID starting with 1b355deaff042f93bc48bb314663ce68da1197d4a9c937a8b80688ef9d63b56b not found: ID does not exist" containerID="1b355deaff042f93bc48bb314663ce68da1197d4a9c937a8b80688ef9d63b56b" Oct 07 13:59:01 crc kubenswrapper[5024]: I1007 13:59:01.253159 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b355deaff042f93bc48bb314663ce68da1197d4a9c937a8b80688ef9d63b56b"} err="failed to get container status \"1b355deaff042f93bc48bb314663ce68da1197d4a9c937a8b80688ef9d63b56b\": rpc error: code = NotFound desc = could not find container \"1b355deaff042f93bc48bb314663ce68da1197d4a9c937a8b80688ef9d63b56b\": container with ID starting with 1b355deaff042f93bc48bb314663ce68da1197d4a9c937a8b80688ef9d63b56b not found: ID does not exist" Oct 07 13:59:02 crc kubenswrapper[5024]: I1007 13:59:02.771959 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fb276a2-af1c-4fa0-9470-604c33a82e2f" path="/var/lib/kubelet/pods/8fb276a2-af1c-4fa0-9470-604c33a82e2f/volumes" Oct 07 13:59:10 crc kubenswrapper[5024]: I1007 13:59:10.751877 5024 scope.go:117] "RemoveContainer" containerID="667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599" Oct 07 13:59:10 crc kubenswrapper[5024]: E1007 13:59:10.752872 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:59:21 crc kubenswrapper[5024]: I1007 13:59:21.752579 5024 scope.go:117] "RemoveContainer" containerID="667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599" Oct 07 13:59:21 crc 
kubenswrapper[5024]: E1007 13:59:21.753841 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:59:22 crc kubenswrapper[5024]: I1007 13:59:22.361354 5024 generic.go:334] "Generic (PLEG): container finished" podID="f5c14804-8a97-47ef-b141-4e55430ce37b" containerID="5f57a597e3474973ec28fbbb1d09a978f7fa851587302befe36192657fdfee56" exitCode=0 Oct 07 13:59:22 crc kubenswrapper[5024]: I1007 13:59:22.361432 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jrhdn/crc-debug-tfn6l" event={"ID":"f5c14804-8a97-47ef-b141-4e55430ce37b","Type":"ContainerDied","Data":"5f57a597e3474973ec28fbbb1d09a978f7fa851587302befe36192657fdfee56"} Oct 07 13:59:23 crc kubenswrapper[5024]: I1007 13:59:23.502776 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jrhdn/crc-debug-tfn6l" Oct 07 13:59:23 crc kubenswrapper[5024]: I1007 13:59:23.562382 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jrhdn/crc-debug-tfn6l"] Oct 07 13:59:23 crc kubenswrapper[5024]: I1007 13:59:23.573259 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jrhdn/crc-debug-tfn6l"] Oct 07 13:59:23 crc kubenswrapper[5024]: I1007 13:59:23.672446 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f5c14804-8a97-47ef-b141-4e55430ce37b-host\") pod \"f5c14804-8a97-47ef-b141-4e55430ce37b\" (UID: \"f5c14804-8a97-47ef-b141-4e55430ce37b\") " Oct 07 13:59:23 crc kubenswrapper[5024]: I1007 13:59:23.672695 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5c14804-8a97-47ef-b141-4e55430ce37b-host" (OuterVolumeSpecName: "host") pod "f5c14804-8a97-47ef-b141-4e55430ce37b" (UID: "f5c14804-8a97-47ef-b141-4e55430ce37b"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 07 13:59:23 crc kubenswrapper[5024]: I1007 13:59:23.672777 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgs65\" (UniqueName: \"kubernetes.io/projected/f5c14804-8a97-47ef-b141-4e55430ce37b-kube-api-access-lgs65\") pod \"f5c14804-8a97-47ef-b141-4e55430ce37b\" (UID: \"f5c14804-8a97-47ef-b141-4e55430ce37b\") " Oct 07 13:59:23 crc kubenswrapper[5024]: I1007 13:59:23.673458 5024 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f5c14804-8a97-47ef-b141-4e55430ce37b-host\") on node \"crc\" DevicePath \"\"" Oct 07 13:59:23 crc kubenswrapper[5024]: I1007 13:59:23.682178 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5c14804-8a97-47ef-b141-4e55430ce37b-kube-api-access-lgs65" (OuterVolumeSpecName: "kube-api-access-lgs65") pod "f5c14804-8a97-47ef-b141-4e55430ce37b" (UID: "f5c14804-8a97-47ef-b141-4e55430ce37b"). InnerVolumeSpecName "kube-api-access-lgs65". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:59:23 crc kubenswrapper[5024]: I1007 13:59:23.777793 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgs65\" (UniqueName: \"kubernetes.io/projected/f5c14804-8a97-47ef-b141-4e55430ce37b-kube-api-access-lgs65\") on node \"crc\" DevicePath \"\"" Oct 07 13:59:24 crc kubenswrapper[5024]: I1007 13:59:24.390057 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c95bd53b57973230cd91189420946212f2e1b29182afd7684f4c8b39071c74f3" Oct 07 13:59:24 crc kubenswrapper[5024]: I1007 13:59:24.390230 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jrhdn/crc-debug-tfn6l" Oct 07 13:59:24 crc kubenswrapper[5024]: I1007 13:59:24.559059 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2rpbh"] Oct 07 13:59:24 crc kubenswrapper[5024]: E1007 13:59:24.559545 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb276a2-af1c-4fa0-9470-604c33a82e2f" containerName="registry-server" Oct 07 13:59:24 crc kubenswrapper[5024]: I1007 13:59:24.559560 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb276a2-af1c-4fa0-9470-604c33a82e2f" containerName="registry-server" Oct 07 13:59:24 crc kubenswrapper[5024]: E1007 13:59:24.559573 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb276a2-af1c-4fa0-9470-604c33a82e2f" containerName="extract-content" Oct 07 13:59:24 crc kubenswrapper[5024]: I1007 13:59:24.559579 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb276a2-af1c-4fa0-9470-604c33a82e2f" containerName="extract-content" Oct 07 13:59:24 crc kubenswrapper[5024]: E1007 13:59:24.559637 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5c14804-8a97-47ef-b141-4e55430ce37b" containerName="container-00" Oct 07 13:59:24 crc kubenswrapper[5024]: I1007 13:59:24.559644 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5c14804-8a97-47ef-b141-4e55430ce37b" containerName="container-00" Oct 07 13:59:24 crc kubenswrapper[5024]: E1007 13:59:24.559671 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb276a2-af1c-4fa0-9470-604c33a82e2f" containerName="extract-utilities" Oct 07 13:59:24 crc kubenswrapper[5024]: I1007 13:59:24.559678 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb276a2-af1c-4fa0-9470-604c33a82e2f" containerName="extract-utilities" Oct 07 13:59:24 crc kubenswrapper[5024]: I1007 13:59:24.559956 5024 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8fb276a2-af1c-4fa0-9470-604c33a82e2f" containerName="registry-server" Oct 07 13:59:24 crc kubenswrapper[5024]: I1007 13:59:24.559982 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5c14804-8a97-47ef-b141-4e55430ce37b" containerName="container-00" Oct 07 13:59:24 crc kubenswrapper[5024]: I1007 13:59:24.561765 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2rpbh" Oct 07 13:59:24 crc kubenswrapper[5024]: I1007 13:59:24.579851 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2rpbh"] Oct 07 13:59:24 crc kubenswrapper[5024]: I1007 13:59:24.599646 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b3a29c-74ea-427c-ab8c-2361a82676da-utilities\") pod \"community-operators-2rpbh\" (UID: \"43b3a29c-74ea-427c-ab8c-2361a82676da\") " pod="openshift-marketplace/community-operators-2rpbh" Oct 07 13:59:24 crc kubenswrapper[5024]: I1007 13:59:24.599716 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b3a29c-74ea-427c-ab8c-2361a82676da-catalog-content\") pod \"community-operators-2rpbh\" (UID: \"43b3a29c-74ea-427c-ab8c-2361a82676da\") " pod="openshift-marketplace/community-operators-2rpbh" Oct 07 13:59:24 crc kubenswrapper[5024]: I1007 13:59:24.599955 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqgn8\" (UniqueName: \"kubernetes.io/projected/43b3a29c-74ea-427c-ab8c-2361a82676da-kube-api-access-sqgn8\") pod \"community-operators-2rpbh\" (UID: \"43b3a29c-74ea-427c-ab8c-2361a82676da\") " pod="openshift-marketplace/community-operators-2rpbh" Oct 07 13:59:24 crc kubenswrapper[5024]: E1007 13:59:24.661701 5024 cadvisor_stats_provider.go:516] 
"Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5c14804_8a97_47ef_b141_4e55430ce37b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5c14804_8a97_47ef_b141_4e55430ce37b.slice/crio-c95bd53b57973230cd91189420946212f2e1b29182afd7684f4c8b39071c74f3\": RecentStats: unable to find data in memory cache]" Oct 07 13:59:24 crc kubenswrapper[5024]: I1007 13:59:24.702253 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b3a29c-74ea-427c-ab8c-2361a82676da-utilities\") pod \"community-operators-2rpbh\" (UID: \"43b3a29c-74ea-427c-ab8c-2361a82676da\") " pod="openshift-marketplace/community-operators-2rpbh" Oct 07 13:59:24 crc kubenswrapper[5024]: I1007 13:59:24.702311 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b3a29c-74ea-427c-ab8c-2361a82676da-catalog-content\") pod \"community-operators-2rpbh\" (UID: \"43b3a29c-74ea-427c-ab8c-2361a82676da\") " pod="openshift-marketplace/community-operators-2rpbh" Oct 07 13:59:24 crc kubenswrapper[5024]: I1007 13:59:24.702420 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqgn8\" (UniqueName: \"kubernetes.io/projected/43b3a29c-74ea-427c-ab8c-2361a82676da-kube-api-access-sqgn8\") pod \"community-operators-2rpbh\" (UID: \"43b3a29c-74ea-427c-ab8c-2361a82676da\") " pod="openshift-marketplace/community-operators-2rpbh" Oct 07 13:59:24 crc kubenswrapper[5024]: I1007 13:59:24.702995 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b3a29c-74ea-427c-ab8c-2361a82676da-utilities\") pod \"community-operators-2rpbh\" (UID: \"43b3a29c-74ea-427c-ab8c-2361a82676da\") " 
pod="openshift-marketplace/community-operators-2rpbh" Oct 07 13:59:24 crc kubenswrapper[5024]: I1007 13:59:24.703032 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b3a29c-74ea-427c-ab8c-2361a82676da-catalog-content\") pod \"community-operators-2rpbh\" (UID: \"43b3a29c-74ea-427c-ab8c-2361a82676da\") " pod="openshift-marketplace/community-operators-2rpbh" Oct 07 13:59:24 crc kubenswrapper[5024]: I1007 13:59:24.733322 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqgn8\" (UniqueName: \"kubernetes.io/projected/43b3a29c-74ea-427c-ab8c-2361a82676da-kube-api-access-sqgn8\") pod \"community-operators-2rpbh\" (UID: \"43b3a29c-74ea-427c-ab8c-2361a82676da\") " pod="openshift-marketplace/community-operators-2rpbh" Oct 07 13:59:24 crc kubenswrapper[5024]: I1007 13:59:24.831169 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5c14804-8a97-47ef-b141-4e55430ce37b" path="/var/lib/kubelet/pods/f5c14804-8a97-47ef-b141-4e55430ce37b/volumes" Oct 07 13:59:24 crc kubenswrapper[5024]: I1007 13:59:24.908248 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2rpbh" Oct 07 13:59:24 crc kubenswrapper[5024]: I1007 13:59:24.944777 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jrhdn/crc-debug-sv2v4"] Oct 07 13:59:24 crc kubenswrapper[5024]: I1007 13:59:24.946056 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jrhdn/crc-debug-sv2v4" Oct 07 13:59:25 crc kubenswrapper[5024]: I1007 13:59:25.114699 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e4f221d-6bda-4799-9d77-35e265a369d5-host\") pod \"crc-debug-sv2v4\" (UID: \"5e4f221d-6bda-4799-9d77-35e265a369d5\") " pod="openshift-must-gather-jrhdn/crc-debug-sv2v4" Oct 07 13:59:25 crc kubenswrapper[5024]: I1007 13:59:25.115252 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdbfn\" (UniqueName: \"kubernetes.io/projected/5e4f221d-6bda-4799-9d77-35e265a369d5-kube-api-access-zdbfn\") pod \"crc-debug-sv2v4\" (UID: \"5e4f221d-6bda-4799-9d77-35e265a369d5\") " pod="openshift-must-gather-jrhdn/crc-debug-sv2v4" Oct 07 13:59:25 crc kubenswrapper[5024]: I1007 13:59:25.218543 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e4f221d-6bda-4799-9d77-35e265a369d5-host\") pod \"crc-debug-sv2v4\" (UID: \"5e4f221d-6bda-4799-9d77-35e265a369d5\") " pod="openshift-must-gather-jrhdn/crc-debug-sv2v4" Oct 07 13:59:25 crc kubenswrapper[5024]: I1007 13:59:25.218704 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdbfn\" (UniqueName: \"kubernetes.io/projected/5e4f221d-6bda-4799-9d77-35e265a369d5-kube-api-access-zdbfn\") pod \"crc-debug-sv2v4\" (UID: \"5e4f221d-6bda-4799-9d77-35e265a369d5\") " pod="openshift-must-gather-jrhdn/crc-debug-sv2v4" Oct 07 13:59:25 crc kubenswrapper[5024]: I1007 13:59:25.218690 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e4f221d-6bda-4799-9d77-35e265a369d5-host\") pod \"crc-debug-sv2v4\" (UID: \"5e4f221d-6bda-4799-9d77-35e265a369d5\") " pod="openshift-must-gather-jrhdn/crc-debug-sv2v4" Oct 07 13:59:25 crc 
kubenswrapper[5024]: I1007 13:59:25.235858 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2rpbh"] Oct 07 13:59:25 crc kubenswrapper[5024]: I1007 13:59:25.251171 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdbfn\" (UniqueName: \"kubernetes.io/projected/5e4f221d-6bda-4799-9d77-35e265a369d5-kube-api-access-zdbfn\") pod \"crc-debug-sv2v4\" (UID: \"5e4f221d-6bda-4799-9d77-35e265a369d5\") " pod="openshift-must-gather-jrhdn/crc-debug-sv2v4" Oct 07 13:59:25 crc kubenswrapper[5024]: I1007 13:59:25.361937 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jrhdn/crc-debug-sv2v4" Oct 07 13:59:25 crc kubenswrapper[5024]: W1007 13:59:25.391582 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e4f221d_6bda_4799_9d77_35e265a369d5.slice/crio-9ff592e13754c32f510828d682406a6f7e6964440efe8834b32ebfb53a1ad276 WatchSource:0}: Error finding container 9ff592e13754c32f510828d682406a6f7e6964440efe8834b32ebfb53a1ad276: Status 404 returned error can't find the container with id 9ff592e13754c32f510828d682406a6f7e6964440efe8834b32ebfb53a1ad276 Oct 07 13:59:25 crc kubenswrapper[5024]: I1007 13:59:25.414570 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2rpbh" event={"ID":"43b3a29c-74ea-427c-ab8c-2361a82676da","Type":"ContainerStarted","Data":"91c0113424900992855e290ffb7e5032fbc838dde5fbf17d156591992584b5a6"} Oct 07 13:59:26 crc kubenswrapper[5024]: I1007 13:59:26.427936 5024 generic.go:334] "Generic (PLEG): container finished" podID="43b3a29c-74ea-427c-ab8c-2361a82676da" containerID="f4132ebcfaf5965911d3fbe6a3ea01199fe78bd391129e16cca249ed023a7246" exitCode=0 Oct 07 13:59:26 crc kubenswrapper[5024]: I1007 13:59:26.428011 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-2rpbh" event={"ID":"43b3a29c-74ea-427c-ab8c-2361a82676da","Type":"ContainerDied","Data":"f4132ebcfaf5965911d3fbe6a3ea01199fe78bd391129e16cca249ed023a7246"} Oct 07 13:59:26 crc kubenswrapper[5024]: I1007 13:59:26.432415 5024 generic.go:334] "Generic (PLEG): container finished" podID="5e4f221d-6bda-4799-9d77-35e265a369d5" containerID="f172385825220194eb279eb7c52875e61acdd1dbf94e906743c0b7cf505ff46e" exitCode=0 Oct 07 13:59:26 crc kubenswrapper[5024]: I1007 13:59:26.432473 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jrhdn/crc-debug-sv2v4" event={"ID":"5e4f221d-6bda-4799-9d77-35e265a369d5","Type":"ContainerDied","Data":"f172385825220194eb279eb7c52875e61acdd1dbf94e906743c0b7cf505ff46e"} Oct 07 13:59:26 crc kubenswrapper[5024]: I1007 13:59:26.432510 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jrhdn/crc-debug-sv2v4" event={"ID":"5e4f221d-6bda-4799-9d77-35e265a369d5","Type":"ContainerStarted","Data":"9ff592e13754c32f510828d682406a6f7e6964440efe8834b32ebfb53a1ad276"} Oct 07 13:59:27 crc kubenswrapper[5024]: I1007 13:59:27.463340 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2rpbh" event={"ID":"43b3a29c-74ea-427c-ab8c-2361a82676da","Type":"ContainerStarted","Data":"97b75722ad6c59745a4eff36ea4ea540c0ba2870b66ef8578d9f67bb2188b29b"} Oct 07 13:59:27 crc kubenswrapper[5024]: I1007 13:59:27.623773 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jrhdn/crc-debug-sv2v4" Oct 07 13:59:27 crc kubenswrapper[5024]: I1007 13:59:27.773647 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdbfn\" (UniqueName: \"kubernetes.io/projected/5e4f221d-6bda-4799-9d77-35e265a369d5-kube-api-access-zdbfn\") pod \"5e4f221d-6bda-4799-9d77-35e265a369d5\" (UID: \"5e4f221d-6bda-4799-9d77-35e265a369d5\") " Oct 07 13:59:27 crc kubenswrapper[5024]: I1007 13:59:27.774243 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e4f221d-6bda-4799-9d77-35e265a369d5-host\") pod \"5e4f221d-6bda-4799-9d77-35e265a369d5\" (UID: \"5e4f221d-6bda-4799-9d77-35e265a369d5\") " Oct 07 13:59:27 crc kubenswrapper[5024]: I1007 13:59:27.774410 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e4f221d-6bda-4799-9d77-35e265a369d5-host" (OuterVolumeSpecName: "host") pod "5e4f221d-6bda-4799-9d77-35e265a369d5" (UID: "5e4f221d-6bda-4799-9d77-35e265a369d5"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 07 13:59:27 crc kubenswrapper[5024]: I1007 13:59:27.775366 5024 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e4f221d-6bda-4799-9d77-35e265a369d5-host\") on node \"crc\" DevicePath \"\"" Oct 07 13:59:27 crc kubenswrapper[5024]: I1007 13:59:27.779606 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e4f221d-6bda-4799-9d77-35e265a369d5-kube-api-access-zdbfn" (OuterVolumeSpecName: "kube-api-access-zdbfn") pod "5e4f221d-6bda-4799-9d77-35e265a369d5" (UID: "5e4f221d-6bda-4799-9d77-35e265a369d5"). InnerVolumeSpecName "kube-api-access-zdbfn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:59:27 crc kubenswrapper[5024]: I1007 13:59:27.877409 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdbfn\" (UniqueName: \"kubernetes.io/projected/5e4f221d-6bda-4799-9d77-35e265a369d5-kube-api-access-zdbfn\") on node \"crc\" DevicePath \"\"" Oct 07 13:59:28 crc kubenswrapper[5024]: I1007 13:59:28.473963 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jrhdn/crc-debug-sv2v4" Oct 07 13:59:28 crc kubenswrapper[5024]: I1007 13:59:28.473962 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jrhdn/crc-debug-sv2v4" event={"ID":"5e4f221d-6bda-4799-9d77-35e265a369d5","Type":"ContainerDied","Data":"9ff592e13754c32f510828d682406a6f7e6964440efe8834b32ebfb53a1ad276"} Oct 07 13:59:28 crc kubenswrapper[5024]: I1007 13:59:28.474122 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ff592e13754c32f510828d682406a6f7e6964440efe8834b32ebfb53a1ad276" Oct 07 13:59:28 crc kubenswrapper[5024]: I1007 13:59:28.476055 5024 generic.go:334] "Generic (PLEG): container finished" podID="43b3a29c-74ea-427c-ab8c-2361a82676da" containerID="97b75722ad6c59745a4eff36ea4ea540c0ba2870b66ef8578d9f67bb2188b29b" exitCode=0 Oct 07 13:59:28 crc kubenswrapper[5024]: I1007 13:59:28.476090 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2rpbh" event={"ID":"43b3a29c-74ea-427c-ab8c-2361a82676da","Type":"ContainerDied","Data":"97b75722ad6c59745a4eff36ea4ea540c0ba2870b66ef8578d9f67bb2188b29b"} Oct 07 13:59:29 crc kubenswrapper[5024]: I1007 13:59:29.489204 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2rpbh" event={"ID":"43b3a29c-74ea-427c-ab8c-2361a82676da","Type":"ContainerStarted","Data":"5d493440149a9b8fd8efe92ea94ec9af9b9e68827525883026f9524b550c75c7"} Oct 07 13:59:29 crc 
kubenswrapper[5024]: I1007 13:59:29.509884 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2rpbh" podStartSLOduration=3.029587944 podStartE2EDuration="5.509856918s" podCreationTimestamp="2025-10-07 13:59:24 +0000 UTC" firstStartedPulling="2025-10-07 13:59:26.43124959 +0000 UTC m=+5504.507036458" lastFinishedPulling="2025-10-07 13:59:28.911518594 +0000 UTC m=+5506.987305432" observedRunningTime="2025-10-07 13:59:29.504002958 +0000 UTC m=+5507.579789796" watchObservedRunningTime="2025-10-07 13:59:29.509856918 +0000 UTC m=+5507.585643766" Oct 07 13:59:34 crc kubenswrapper[5024]: I1007 13:59:34.908439 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2rpbh" Oct 07 13:59:34 crc kubenswrapper[5024]: I1007 13:59:34.909673 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2rpbh" Oct 07 13:59:34 crc kubenswrapper[5024]: I1007 13:59:34.964792 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2rpbh" Oct 07 13:59:35 crc kubenswrapper[5024]: I1007 13:59:35.592271 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2rpbh" Oct 07 13:59:35 crc kubenswrapper[5024]: I1007 13:59:35.641623 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2rpbh"] Oct 07 13:59:35 crc kubenswrapper[5024]: I1007 13:59:35.751604 5024 scope.go:117] "RemoveContainer" containerID="667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599" Oct 07 13:59:35 crc kubenswrapper[5024]: E1007 13:59:35.752174 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t95cr_openshift-machine-config-operator(273432b3-0436-4a74-afa3-7070f9bf5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" Oct 07 13:59:36 crc kubenswrapper[5024]: I1007 13:59:36.393094 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jrhdn/crc-debug-sv2v4"] Oct 07 13:59:36 crc kubenswrapper[5024]: I1007 13:59:36.400486 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jrhdn/crc-debug-sv2v4"] Oct 07 13:59:36 crc kubenswrapper[5024]: I1007 13:59:36.766672 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e4f221d-6bda-4799-9d77-35e265a369d5" path="/var/lib/kubelet/pods/5e4f221d-6bda-4799-9d77-35e265a369d5/volumes" Oct 07 13:59:37 crc kubenswrapper[5024]: I1007 13:59:37.558701 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2rpbh" podUID="43b3a29c-74ea-427c-ab8c-2361a82676da" containerName="registry-server" containerID="cri-o://5d493440149a9b8fd8efe92ea94ec9af9b9e68827525883026f9524b550c75c7" gracePeriod=2 Oct 07 13:59:37 crc kubenswrapper[5024]: I1007 13:59:37.606292 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jrhdn/crc-debug-r82f2"] Oct 07 13:59:37 crc kubenswrapper[5024]: E1007 13:59:37.606712 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e4f221d-6bda-4799-9d77-35e265a369d5" containerName="container-00" Oct 07 13:59:37 crc kubenswrapper[5024]: I1007 13:59:37.606732 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e4f221d-6bda-4799-9d77-35e265a369d5" containerName="container-00" Oct 07 13:59:37 crc kubenswrapper[5024]: I1007 13:59:37.606954 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e4f221d-6bda-4799-9d77-35e265a369d5" containerName="container-00" Oct 07 13:59:37 crc kubenswrapper[5024]: I1007 13:59:37.607676 
5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jrhdn/crc-debug-r82f2" Oct 07 13:59:37 crc kubenswrapper[5024]: I1007 13:59:37.673600 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r4bf\" (UniqueName: \"kubernetes.io/projected/ce4dd86f-044f-4225-9c78-c6e7adc662c0-kube-api-access-9r4bf\") pod \"crc-debug-r82f2\" (UID: \"ce4dd86f-044f-4225-9c78-c6e7adc662c0\") " pod="openshift-must-gather-jrhdn/crc-debug-r82f2" Oct 07 13:59:37 crc kubenswrapper[5024]: I1007 13:59:37.673905 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce4dd86f-044f-4225-9c78-c6e7adc662c0-host\") pod \"crc-debug-r82f2\" (UID: \"ce4dd86f-044f-4225-9c78-c6e7adc662c0\") " pod="openshift-must-gather-jrhdn/crc-debug-r82f2" Oct 07 13:59:37 crc kubenswrapper[5024]: I1007 13:59:37.776074 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r4bf\" (UniqueName: \"kubernetes.io/projected/ce4dd86f-044f-4225-9c78-c6e7adc662c0-kube-api-access-9r4bf\") pod \"crc-debug-r82f2\" (UID: \"ce4dd86f-044f-4225-9c78-c6e7adc662c0\") " pod="openshift-must-gather-jrhdn/crc-debug-r82f2" Oct 07 13:59:37 crc kubenswrapper[5024]: I1007 13:59:37.776387 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce4dd86f-044f-4225-9c78-c6e7adc662c0-host\") pod \"crc-debug-r82f2\" (UID: \"ce4dd86f-044f-4225-9c78-c6e7adc662c0\") " pod="openshift-must-gather-jrhdn/crc-debug-r82f2" Oct 07 13:59:37 crc kubenswrapper[5024]: I1007 13:59:37.776553 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce4dd86f-044f-4225-9c78-c6e7adc662c0-host\") pod \"crc-debug-r82f2\" (UID: \"ce4dd86f-044f-4225-9c78-c6e7adc662c0\") " 
pod="openshift-must-gather-jrhdn/crc-debug-r82f2" Oct 07 13:59:37 crc kubenswrapper[5024]: I1007 13:59:37.812805 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r4bf\" (UniqueName: \"kubernetes.io/projected/ce4dd86f-044f-4225-9c78-c6e7adc662c0-kube-api-access-9r4bf\") pod \"crc-debug-r82f2\" (UID: \"ce4dd86f-044f-4225-9c78-c6e7adc662c0\") " pod="openshift-must-gather-jrhdn/crc-debug-r82f2" Oct 07 13:59:37 crc kubenswrapper[5024]: I1007 13:59:37.933641 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jrhdn/crc-debug-r82f2" Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.527451 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2rpbh" Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.575284 5024 generic.go:334] "Generic (PLEG): container finished" podID="43b3a29c-74ea-427c-ab8c-2361a82676da" containerID="5d493440149a9b8fd8efe92ea94ec9af9b9e68827525883026f9524b550c75c7" exitCode=0 Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.575358 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2rpbh" event={"ID":"43b3a29c-74ea-427c-ab8c-2361a82676da","Type":"ContainerDied","Data":"5d493440149a9b8fd8efe92ea94ec9af9b9e68827525883026f9524b550c75c7"} Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.575398 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2rpbh" event={"ID":"43b3a29c-74ea-427c-ab8c-2361a82676da","Type":"ContainerDied","Data":"91c0113424900992855e290ffb7e5032fbc838dde5fbf17d156591992584b5a6"} Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.575419 5024 scope.go:117] "RemoveContainer" containerID="5d493440149a9b8fd8efe92ea94ec9af9b9e68827525883026f9524b550c75c7" Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.576345 5024 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2rpbh" Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.579543 5024 generic.go:334] "Generic (PLEG): container finished" podID="ce4dd86f-044f-4225-9c78-c6e7adc662c0" containerID="a88e0bc2afbc308d5986f7d6ce2ea648a59fef1d5ffca7222648349f0e8abb4e" exitCode=0 Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.579592 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jrhdn/crc-debug-r82f2" event={"ID":"ce4dd86f-044f-4225-9c78-c6e7adc662c0","Type":"ContainerDied","Data":"a88e0bc2afbc308d5986f7d6ce2ea648a59fef1d5ffca7222648349f0e8abb4e"} Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.579617 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jrhdn/crc-debug-r82f2" event={"ID":"ce4dd86f-044f-4225-9c78-c6e7adc662c0","Type":"ContainerStarted","Data":"885587c303384416a47a370cff19b4d0aff762883139d0767935ee9b33154f42"} Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.599858 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b3a29c-74ea-427c-ab8c-2361a82676da-utilities\") pod \"43b3a29c-74ea-427c-ab8c-2361a82676da\" (UID: \"43b3a29c-74ea-427c-ab8c-2361a82676da\") " Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.600394 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqgn8\" (UniqueName: \"kubernetes.io/projected/43b3a29c-74ea-427c-ab8c-2361a82676da-kube-api-access-sqgn8\") pod \"43b3a29c-74ea-427c-ab8c-2361a82676da\" (UID: \"43b3a29c-74ea-427c-ab8c-2361a82676da\") " Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.601194 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43b3a29c-74ea-427c-ab8c-2361a82676da-utilities" (OuterVolumeSpecName: "utilities") pod "43b3a29c-74ea-427c-ab8c-2361a82676da" 
(UID: "43b3a29c-74ea-427c-ab8c-2361a82676da"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.601806 5024 scope.go:117] "RemoveContainer" containerID="97b75722ad6c59745a4eff36ea4ea540c0ba2870b66ef8578d9f67bb2188b29b" Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.602321 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b3a29c-74ea-427c-ab8c-2361a82676da-catalog-content\") pod \"43b3a29c-74ea-427c-ab8c-2361a82676da\" (UID: \"43b3a29c-74ea-427c-ab8c-2361a82676da\") " Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.606768 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b3a29c-74ea-427c-ab8c-2361a82676da-utilities\") on node \"crc\" DevicePath \"\"" Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.609519 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43b3a29c-74ea-427c-ab8c-2361a82676da-kube-api-access-sqgn8" (OuterVolumeSpecName: "kube-api-access-sqgn8") pod "43b3a29c-74ea-427c-ab8c-2361a82676da" (UID: "43b3a29c-74ea-427c-ab8c-2361a82676da"). InnerVolumeSpecName "kube-api-access-sqgn8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.626850 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jrhdn/crc-debug-r82f2"] Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.635685 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jrhdn/crc-debug-r82f2"] Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.635999 5024 scope.go:117] "RemoveContainer" containerID="f4132ebcfaf5965911d3fbe6a3ea01199fe78bd391129e16cca249ed023a7246" Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.657088 5024 scope.go:117] "RemoveContainer" containerID="5d493440149a9b8fd8efe92ea94ec9af9b9e68827525883026f9524b550c75c7" Oct 07 13:59:38 crc kubenswrapper[5024]: E1007 13:59:38.658397 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d493440149a9b8fd8efe92ea94ec9af9b9e68827525883026f9524b550c75c7\": container with ID starting with 5d493440149a9b8fd8efe92ea94ec9af9b9e68827525883026f9524b550c75c7 not found: ID does not exist" containerID="5d493440149a9b8fd8efe92ea94ec9af9b9e68827525883026f9524b550c75c7" Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.658441 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d493440149a9b8fd8efe92ea94ec9af9b9e68827525883026f9524b550c75c7"} err="failed to get container status \"5d493440149a9b8fd8efe92ea94ec9af9b9e68827525883026f9524b550c75c7\": rpc error: code = NotFound desc = could not find container \"5d493440149a9b8fd8efe92ea94ec9af9b9e68827525883026f9524b550c75c7\": container with ID starting with 5d493440149a9b8fd8efe92ea94ec9af9b9e68827525883026f9524b550c75c7 not found: ID does not exist" Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.658468 5024 scope.go:117] "RemoveContainer" containerID="97b75722ad6c59745a4eff36ea4ea540c0ba2870b66ef8578d9f67bb2188b29b" Oct 07 
13:59:38 crc kubenswrapper[5024]: E1007 13:59:38.658852 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97b75722ad6c59745a4eff36ea4ea540c0ba2870b66ef8578d9f67bb2188b29b\": container with ID starting with 97b75722ad6c59745a4eff36ea4ea540c0ba2870b66ef8578d9f67bb2188b29b not found: ID does not exist" containerID="97b75722ad6c59745a4eff36ea4ea540c0ba2870b66ef8578d9f67bb2188b29b" Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.658917 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97b75722ad6c59745a4eff36ea4ea540c0ba2870b66ef8578d9f67bb2188b29b"} err="failed to get container status \"97b75722ad6c59745a4eff36ea4ea540c0ba2870b66ef8578d9f67bb2188b29b\": rpc error: code = NotFound desc = could not find container \"97b75722ad6c59745a4eff36ea4ea540c0ba2870b66ef8578d9f67bb2188b29b\": container with ID starting with 97b75722ad6c59745a4eff36ea4ea540c0ba2870b66ef8578d9f67bb2188b29b not found: ID does not exist" Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.658958 5024 scope.go:117] "RemoveContainer" containerID="f4132ebcfaf5965911d3fbe6a3ea01199fe78bd391129e16cca249ed023a7246" Oct 07 13:59:38 crc kubenswrapper[5024]: E1007 13:59:38.659374 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4132ebcfaf5965911d3fbe6a3ea01199fe78bd391129e16cca249ed023a7246\": container with ID starting with f4132ebcfaf5965911d3fbe6a3ea01199fe78bd391129e16cca249ed023a7246 not found: ID does not exist" containerID="f4132ebcfaf5965911d3fbe6a3ea01199fe78bd391129e16cca249ed023a7246" Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.659407 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4132ebcfaf5965911d3fbe6a3ea01199fe78bd391129e16cca249ed023a7246"} err="failed to get container status 
\"f4132ebcfaf5965911d3fbe6a3ea01199fe78bd391129e16cca249ed023a7246\": rpc error: code = NotFound desc = could not find container \"f4132ebcfaf5965911d3fbe6a3ea01199fe78bd391129e16cca249ed023a7246\": container with ID starting with f4132ebcfaf5965911d3fbe6a3ea01199fe78bd391129e16cca249ed023a7246 not found: ID does not exist" Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.661965 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43b3a29c-74ea-427c-ab8c-2361a82676da-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43b3a29c-74ea-427c-ab8c-2361a82676da" (UID: "43b3a29c-74ea-427c-ab8c-2361a82676da"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.709430 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqgn8\" (UniqueName: \"kubernetes.io/projected/43b3a29c-74ea-427c-ab8c-2361a82676da-kube-api-access-sqgn8\") on node \"crc\" DevicePath \"\"" Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.709467 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b3a29c-74ea-427c-ab8c-2361a82676da-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.909089 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2rpbh"] Oct 07 13:59:38 crc kubenswrapper[5024]: I1007 13:59:38.926597 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2rpbh"] Oct 07 13:59:39 crc kubenswrapper[5024]: I1007 13:59:39.722592 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jrhdn/crc-debug-r82f2" Oct 07 13:59:39 crc kubenswrapper[5024]: I1007 13:59:39.833217 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce4dd86f-044f-4225-9c78-c6e7adc662c0-host\") pod \"ce4dd86f-044f-4225-9c78-c6e7adc662c0\" (UID: \"ce4dd86f-044f-4225-9c78-c6e7adc662c0\") " Oct 07 13:59:39 crc kubenswrapper[5024]: I1007 13:59:39.833388 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce4dd86f-044f-4225-9c78-c6e7adc662c0-host" (OuterVolumeSpecName: "host") pod "ce4dd86f-044f-4225-9c78-c6e7adc662c0" (UID: "ce4dd86f-044f-4225-9c78-c6e7adc662c0"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 07 13:59:39 crc kubenswrapper[5024]: I1007 13:59:39.833425 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r4bf\" (UniqueName: \"kubernetes.io/projected/ce4dd86f-044f-4225-9c78-c6e7adc662c0-kube-api-access-9r4bf\") pod \"ce4dd86f-044f-4225-9c78-c6e7adc662c0\" (UID: \"ce4dd86f-044f-4225-9c78-c6e7adc662c0\") " Oct 07 13:59:39 crc kubenswrapper[5024]: I1007 13:59:39.834068 5024 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce4dd86f-044f-4225-9c78-c6e7adc662c0-host\") on node \"crc\" DevicePath \"\"" Oct 07 13:59:39 crc kubenswrapper[5024]: I1007 13:59:39.844961 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce4dd86f-044f-4225-9c78-c6e7adc662c0-kube-api-access-9r4bf" (OuterVolumeSpecName: "kube-api-access-9r4bf") pod "ce4dd86f-044f-4225-9c78-c6e7adc662c0" (UID: "ce4dd86f-044f-4225-9c78-c6e7adc662c0"). InnerVolumeSpecName "kube-api-access-9r4bf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 13:59:39 crc kubenswrapper[5024]: I1007 13:59:39.935955 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r4bf\" (UniqueName: \"kubernetes.io/projected/ce4dd86f-044f-4225-9c78-c6e7adc662c0-kube-api-access-9r4bf\") on node \"crc\" DevicePath \"\"" Oct 07 13:59:40 crc kubenswrapper[5024]: I1007 13:59:40.356231 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-58c4cd55f4-6tzwp_39116473-582b-4f61-b3c8-44ab955c277b/kube-rbac-proxy/0.log" Oct 07 13:59:40 crc kubenswrapper[5024]: I1007 13:59:40.371667 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-58c4cd55f4-6tzwp_39116473-582b-4f61-b3c8-44ab955c277b/manager/0.log" Oct 07 13:59:40 crc kubenswrapper[5024]: I1007 13:59:40.537355 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw_21da6c14-ee6e-408d-aecd-ffb8cdeebc99/util/0.log" Oct 07 13:59:40 crc kubenswrapper[5024]: I1007 13:59:40.611063 5024 scope.go:117] "RemoveContainer" containerID="a88e0bc2afbc308d5986f7d6ce2ea648a59fef1d5ffca7222648349f0e8abb4e" Oct 07 13:59:40 crc kubenswrapper[5024]: I1007 13:59:40.611168 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jrhdn/crc-debug-r82f2" Oct 07 13:59:40 crc kubenswrapper[5024]: I1007 13:59:40.763637 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43b3a29c-74ea-427c-ab8c-2361a82676da" path="/var/lib/kubelet/pods/43b3a29c-74ea-427c-ab8c-2361a82676da/volumes" Oct 07 13:59:40 crc kubenswrapper[5024]: I1007 13:59:40.764562 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce4dd86f-044f-4225-9c78-c6e7adc662c0" path="/var/lib/kubelet/pods/ce4dd86f-044f-4225-9c78-c6e7adc662c0/volumes" Oct 07 13:59:41 crc kubenswrapper[5024]: I1007 13:59:41.272884 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw_21da6c14-ee6e-408d-aecd-ffb8cdeebc99/pull/0.log" Oct 07 13:59:41 crc kubenswrapper[5024]: I1007 13:59:41.310969 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw_21da6c14-ee6e-408d-aecd-ffb8cdeebc99/util/0.log" Oct 07 13:59:41 crc kubenswrapper[5024]: I1007 13:59:41.321595 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw_21da6c14-ee6e-408d-aecd-ffb8cdeebc99/pull/0.log" Oct 07 13:59:41 crc kubenswrapper[5024]: I1007 13:59:41.483602 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw_21da6c14-ee6e-408d-aecd-ffb8cdeebc99/util/0.log" Oct 07 13:59:41 crc kubenswrapper[5024]: I1007 13:59:41.486199 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw_21da6c14-ee6e-408d-aecd-ffb8cdeebc99/pull/0.log" Oct 07 13:59:41 crc kubenswrapper[5024]: I1007 13:59:41.522731 5024 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_c9f12b4f159359add44c45f5a1e8b81a2ea36daf9526b034036324f56drtfbw_21da6c14-ee6e-408d-aecd-ffb8cdeebc99/extract/0.log" Oct 07 13:59:41 crc kubenswrapper[5024]: I1007 13:59:41.653823 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-646554d9b9-dzx9r_52744582-1aca-4f75-8dc3-337a19ab3fba/kube-rbac-proxy/0.log" Oct 07 13:59:41 crc kubenswrapper[5024]: I1007 13:59:41.735275 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-646554d9b9-dzx9r_52744582-1aca-4f75-8dc3-337a19ab3fba/manager/0.log" Oct 07 13:59:41 crc kubenswrapper[5024]: I1007 13:59:41.758244 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-75dfd9b554-8r8zb_75de1768-d533-460a-8397-012ef25ade39/kube-rbac-proxy/0.log" Oct 07 13:59:41 crc kubenswrapper[5024]: I1007 13:59:41.847202 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-75dfd9b554-8r8zb_75de1768-d533-460a-8397-012ef25ade39/manager/0.log" Oct 07 13:59:41 crc kubenswrapper[5024]: I1007 13:59:41.957687 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5dc44df7d5-dszfh_7df3da1a-3dc0-400e-a3a6-4878652ecfdc/kube-rbac-proxy/0.log" Oct 07 13:59:42 crc kubenswrapper[5024]: I1007 13:59:42.031525 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5dc44df7d5-dszfh_7df3da1a-3dc0-400e-a3a6-4878652ecfdc/manager/0.log" Oct 07 13:59:42 crc kubenswrapper[5024]: I1007 13:59:42.172820 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-54b4974c45-zbddw_a1d2d630-766f-4486-b909-6f622bdc9748/kube-rbac-proxy/0.log" Oct 07 13:59:42 crc kubenswrapper[5024]: I1007 
13:59:42.177629 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-54b4974c45-zbddw_a1d2d630-766f-4486-b909-6f622bdc9748/manager/0.log" Oct 07 13:59:42 crc kubenswrapper[5024]: I1007 13:59:42.310597 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-76d5b87f47-qnjtl_f2285083-77e3-448b-b4f0-27adfb683e17/kube-rbac-proxy/0.log" Oct 07 13:59:42 crc kubenswrapper[5024]: I1007 13:59:42.376282 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-76d5b87f47-qnjtl_f2285083-77e3-448b-b4f0-27adfb683e17/manager/0.log" Oct 07 13:59:42 crc kubenswrapper[5024]: I1007 13:59:42.438957 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-658588b8c9-7fsjz_0a3c98f6-0e02-4493-b3d6-f030d73ca3ac/kube-rbac-proxy/0.log" Oct 07 13:59:42 crc kubenswrapper[5024]: I1007 13:59:42.636398 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-649675d675-xscld_8c026852-45e6-4a05-bd27-3af46438df69/kube-rbac-proxy/0.log" Oct 07 13:59:42 crc kubenswrapper[5024]: I1007 13:59:42.667132 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-649675d675-xscld_8c026852-45e6-4a05-bd27-3af46438df69/manager/0.log" Oct 07 13:59:42 crc kubenswrapper[5024]: I1007 13:59:42.722408 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-658588b8c9-7fsjz_0a3c98f6-0e02-4493-b3d6-f030d73ca3ac/manager/0.log" Oct 07 13:59:43 crc kubenswrapper[5024]: I1007 13:59:43.328796 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b5ccf6d9c-rwftw_be793dd5-2676-4289-961a-9e6c0731b13a/kube-rbac-proxy/0.log" Oct 07 
13:59:43 crc kubenswrapper[5024]: I1007 13:59:43.419330 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b5ccf6d9c-rwftw_be793dd5-2676-4289-961a-9e6c0731b13a/manager/0.log" Oct 07 13:59:43 crc kubenswrapper[5024]: I1007 13:59:43.530166 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-65d89cfd9f-l6h5v_55b45b25-e171-4e43-8da0-b18c06e7515b/kube-rbac-proxy/0.log" Oct 07 13:59:43 crc kubenswrapper[5024]: I1007 13:59:43.635046 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-65d89cfd9f-l6h5v_55b45b25-e171-4e43-8da0-b18c06e7515b/manager/0.log" Oct 07 13:59:43 crc kubenswrapper[5024]: I1007 13:59:43.648301 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6cd6d7bdf5-qc4t4_4a2174c3-a953-4535-9b70-5414c07633c0/kube-rbac-proxy/0.log" Oct 07 13:59:43 crc kubenswrapper[5024]: I1007 13:59:43.707077 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6cd6d7bdf5-qc4t4_4a2174c3-a953-4535-9b70-5414c07633c0/manager/0.log" Oct 07 13:59:43 crc kubenswrapper[5024]: I1007 13:59:43.833419 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-8d984cc4d-gshfh_eeac9611-70f4-4fc6-a161-01420d358164/kube-rbac-proxy/0.log" Oct 07 13:59:43 crc kubenswrapper[5024]: I1007 13:59:43.928696 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7c7fc454ff-rnxmp_f15a5a3b-18f5-4fcf-8ecb-b9e31b144f6a/kube-rbac-proxy/0.log" Oct 07 13:59:43 crc kubenswrapper[5024]: I1007 13:59:43.975347 5024 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-8d984cc4d-gshfh_eeac9611-70f4-4fc6-a161-01420d358164/manager/0.log" Oct 07 13:59:44 crc kubenswrapper[5024]: I1007 13:59:44.120059 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7c7fc454ff-rnxmp_f15a5a3b-18f5-4fcf-8ecb-b9e31b144f6a/manager/0.log" Oct 07 13:59:44 crc kubenswrapper[5024]: I1007 13:59:44.132267 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7468f855d8-lwz42_62fe077d-cf16-4c42-95c3-39435d2c9042/kube-rbac-proxy/0.log" Oct 07 13:59:44 crc kubenswrapper[5024]: I1007 13:59:44.184381 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7468f855d8-lwz42_62fe077d-cf16-4c42-95c3-39435d2c9042/manager/0.log" Oct 07 13:59:44 crc kubenswrapper[5024]: I1007 13:59:44.307088 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb_c310d938-f1f0-4f85-90c3-f0625fc41848/manager/0.log" Oct 07 13:59:44 crc kubenswrapper[5024]: I1007 13:59:44.381654 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5dfbbd665cqffgb_c310d938-f1f0-4f85-90c3-f0625fc41848/kube-rbac-proxy/0.log" Oct 07 13:59:44 crc kubenswrapper[5024]: I1007 13:59:44.384883 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-68c8546c8b-bc9s5_4d780f6f-e916-4f61-922f-bbaeceb4db7c/kube-rbac-proxy/0.log" Oct 07 13:59:44 crc kubenswrapper[5024]: I1007 13:59:44.622621 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-744d6c8d8d-zk24r_f42206e4-8007-40cd-9a9d-264867871e2c/kube-rbac-proxy/0.log" Oct 07 13:59:44 crc 
kubenswrapper[5024]: I1007 13:59:44.828521 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-8jtv2_32e9e6cb-6cd0-41d6-8e30-1b91e5020ac3/registry-server/0.log" Oct 07 13:59:44 crc kubenswrapper[5024]: I1007 13:59:44.836371 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-744d6c8d8d-zk24r_f42206e4-8007-40cd-9a9d-264867871e2c/operator/0.log" Oct 07 13:59:44 crc kubenswrapper[5024]: I1007 13:59:44.837744 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6d8b6f9b9-dgn5z_3fa4af87-32c4-423c-98c1-9cb8b7db5da2/kube-rbac-proxy/0.log" Oct 07 13:59:45 crc kubenswrapper[5024]: I1007 13:59:45.058305 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-54689d9f88-26sbg_b1d7d818-5c22-4f23-9d2e-0459f36de335/kube-rbac-proxy/0.log" Oct 07 13:59:45 crc kubenswrapper[5024]: I1007 13:59:45.097064 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6d8b6f9b9-dgn5z_3fa4af87-32c4-423c-98c1-9cb8b7db5da2/manager/0.log" Oct 07 13:59:45 crc kubenswrapper[5024]: I1007 13:59:45.152232 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-54689d9f88-26sbg_b1d7d818-5c22-4f23-9d2e-0459f36de335/manager/0.log" Oct 07 13:59:45 crc kubenswrapper[5024]: I1007 13:59:45.335390 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-sfklm_01b8fb04-a40d-4e7b-be35-25f4450ec199/operator/0.log" Oct 07 13:59:45 crc kubenswrapper[5024]: I1007 13:59:45.360796 5024 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6859f9b676-mgxlm_eef43156-170e-4dd4-abf1-77fa4763c4b8/kube-rbac-proxy/0.log" Oct 07 13:59:45 crc kubenswrapper[5024]: I1007 13:59:45.475988 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6859f9b676-mgxlm_eef43156-170e-4dd4-abf1-77fa4763c4b8/manager/0.log" Oct 07 13:59:45 crc kubenswrapper[5024]: I1007 13:59:45.596732 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5d4d74dd89-rcqrk_d39b8507-a457-4bdb-95ce-e20abf48c406/kube-rbac-proxy/0.log" Oct 07 13:59:45 crc kubenswrapper[5024]: I1007 13:59:45.756459 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5d4d74dd89-rcqrk_d39b8507-a457-4bdb-95ce-e20abf48c406/manager/0.log" Oct 07 13:59:45 crc kubenswrapper[5024]: I1007 13:59:45.873612 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd5cb47d7-tb2fm_29940c9b-e33d-432a-86e1-e552ce1cefdd/kube-rbac-proxy/0.log" Oct 07 13:59:45 crc kubenswrapper[5024]: I1007 13:59:45.877120 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd5cb47d7-tb2fm_29940c9b-e33d-432a-86e1-e552ce1cefdd/manager/0.log" Oct 07 13:59:45 crc kubenswrapper[5024]: I1007 13:59:45.942714 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-68c8546c8b-bc9s5_4d780f6f-e916-4f61-922f-bbaeceb4db7c/manager/0.log" Oct 07 13:59:46 crc kubenswrapper[5024]: I1007 13:59:46.067813 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6cbc6dd547-rbwnk_5d4c59de-cd84-49b2-b320-3217d5cc31f3/manager/0.log" Oct 07 13:59:46 crc kubenswrapper[5024]: I1007 13:59:46.096506 5024 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6cbc6dd547-rbwnk_5d4c59de-cd84-49b2-b320-3217d5cc31f3/kube-rbac-proxy/0.log" Oct 07 13:59:50 crc kubenswrapper[5024]: I1007 13:59:50.752127 5024 scope.go:117] "RemoveContainer" containerID="667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599" Oct 07 13:59:51 crc kubenswrapper[5024]: I1007 13:59:51.745377 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerStarted","Data":"89d8ab8230c738cc3a8301230394a490874c2e0c066dfe5b732600ea29f417b2"} Oct 07 14:00:00 crc kubenswrapper[5024]: I1007 14:00:00.149207 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29330760-lpxkw"] Oct 07 14:00:00 crc kubenswrapper[5024]: E1007 14:00:00.150317 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43b3a29c-74ea-427c-ab8c-2361a82676da" containerName="extract-utilities" Oct 07 14:00:00 crc kubenswrapper[5024]: I1007 14:00:00.150336 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="43b3a29c-74ea-427c-ab8c-2361a82676da" containerName="extract-utilities" Oct 07 14:00:00 crc kubenswrapper[5024]: E1007 14:00:00.150359 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43b3a29c-74ea-427c-ab8c-2361a82676da" containerName="registry-server" Oct 07 14:00:00 crc kubenswrapper[5024]: I1007 14:00:00.150364 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="43b3a29c-74ea-427c-ab8c-2361a82676da" containerName="registry-server" Oct 07 14:00:00 crc kubenswrapper[5024]: E1007 14:00:00.150389 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43b3a29c-74ea-427c-ab8c-2361a82676da" containerName="extract-content" Oct 07 14:00:00 crc kubenswrapper[5024]: I1007 14:00:00.150396 5024 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="43b3a29c-74ea-427c-ab8c-2361a82676da" containerName="extract-content" Oct 07 14:00:00 crc kubenswrapper[5024]: E1007 14:00:00.150418 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce4dd86f-044f-4225-9c78-c6e7adc662c0" containerName="container-00" Oct 07 14:00:00 crc kubenswrapper[5024]: I1007 14:00:00.150425 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce4dd86f-044f-4225-9c78-c6e7adc662c0" containerName="container-00" Oct 07 14:00:00 crc kubenswrapper[5024]: I1007 14:00:00.150602 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="43b3a29c-74ea-427c-ab8c-2361a82676da" containerName="registry-server" Oct 07 14:00:00 crc kubenswrapper[5024]: I1007 14:00:00.150619 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce4dd86f-044f-4225-9c78-c6e7adc662c0" containerName="container-00" Oct 07 14:00:00 crc kubenswrapper[5024]: I1007 14:00:00.151389 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29330760-lpxkw" Oct 07 14:00:00 crc kubenswrapper[5024]: I1007 14:00:00.154499 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 07 14:00:00 crc kubenswrapper[5024]: I1007 14:00:00.154754 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 07 14:00:00 crc kubenswrapper[5024]: I1007 14:00:00.157665 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcf2a65c-dda2-472d-94a8-071f57793310-config-volume\") pod \"collect-profiles-29330760-lpxkw\" (UID: \"dcf2a65c-dda2-472d-94a8-071f57793310\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330760-lpxkw" Oct 07 14:00:00 crc kubenswrapper[5024]: I1007 14:00:00.157750 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcf2a65c-dda2-472d-94a8-071f57793310-secret-volume\") pod \"collect-profiles-29330760-lpxkw\" (UID: \"dcf2a65c-dda2-472d-94a8-071f57793310\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330760-lpxkw" Oct 07 14:00:00 crc kubenswrapper[5024]: I1007 14:00:00.158018 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6gqv\" (UniqueName: \"kubernetes.io/projected/dcf2a65c-dda2-472d-94a8-071f57793310-kube-api-access-k6gqv\") pod \"collect-profiles-29330760-lpxkw\" (UID: \"dcf2a65c-dda2-472d-94a8-071f57793310\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330760-lpxkw" Oct 07 14:00:00 crc kubenswrapper[5024]: I1007 14:00:00.161928 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29330760-lpxkw"] Oct 07 14:00:00 crc kubenswrapper[5024]: I1007 14:00:00.260038 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcf2a65c-dda2-472d-94a8-071f57793310-config-volume\") pod \"collect-profiles-29330760-lpxkw\" (UID: \"dcf2a65c-dda2-472d-94a8-071f57793310\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330760-lpxkw" Oct 07 14:00:00 crc kubenswrapper[5024]: I1007 14:00:00.260225 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcf2a65c-dda2-472d-94a8-071f57793310-secret-volume\") pod \"collect-profiles-29330760-lpxkw\" (UID: \"dcf2a65c-dda2-472d-94a8-071f57793310\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330760-lpxkw" Oct 07 14:00:00 crc kubenswrapper[5024]: I1007 14:00:00.260291 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6gqv\" (UniqueName: \"kubernetes.io/projected/dcf2a65c-dda2-472d-94a8-071f57793310-kube-api-access-k6gqv\") pod \"collect-profiles-29330760-lpxkw\" (UID: \"dcf2a65c-dda2-472d-94a8-071f57793310\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330760-lpxkw" Oct 07 14:00:00 crc kubenswrapper[5024]: I1007 14:00:00.261032 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcf2a65c-dda2-472d-94a8-071f57793310-config-volume\") pod \"collect-profiles-29330760-lpxkw\" (UID: \"dcf2a65c-dda2-472d-94a8-071f57793310\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330760-lpxkw" Oct 07 14:00:00 crc kubenswrapper[5024]: I1007 14:00:00.278896 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/dcf2a65c-dda2-472d-94a8-071f57793310-secret-volume\") pod \"collect-profiles-29330760-lpxkw\" (UID: \"dcf2a65c-dda2-472d-94a8-071f57793310\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330760-lpxkw" Oct 07 14:00:00 crc kubenswrapper[5024]: I1007 14:00:00.279016 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6gqv\" (UniqueName: \"kubernetes.io/projected/dcf2a65c-dda2-472d-94a8-071f57793310-kube-api-access-k6gqv\") pod \"collect-profiles-29330760-lpxkw\" (UID: \"dcf2a65c-dda2-472d-94a8-071f57793310\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29330760-lpxkw" Oct 07 14:00:00 crc kubenswrapper[5024]: I1007 14:00:00.484985 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29330760-lpxkw" Oct 07 14:00:00 crc kubenswrapper[5024]: I1007 14:00:00.963817 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29330760-lpxkw"] Oct 07 14:00:01 crc kubenswrapper[5024]: I1007 14:00:01.872582 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29330760-lpxkw" event={"ID":"dcf2a65c-dda2-472d-94a8-071f57793310","Type":"ContainerDied","Data":"9755da2cb451231eacb39179808295bb87857518f6e342b2fd4b90570fb161d8"} Oct 07 14:00:01 crc kubenswrapper[5024]: I1007 14:00:01.872403 5024 generic.go:334] "Generic (PLEG): container finished" podID="dcf2a65c-dda2-472d-94a8-071f57793310" containerID="9755da2cb451231eacb39179808295bb87857518f6e342b2fd4b90570fb161d8" exitCode=0 Oct 07 14:00:01 crc kubenswrapper[5024]: I1007 14:00:01.873027 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29330760-lpxkw" 
event={"ID":"dcf2a65c-dda2-472d-94a8-071f57793310","Type":"ContainerStarted","Data":"63b908f9876086c72d649fb0e64e46dd341efa6d6efd66a204eb9be02880193b"} Oct 07 14:00:03 crc kubenswrapper[5024]: I1007 14:00:03.330257 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29330760-lpxkw" Oct 07 14:00:03 crc kubenswrapper[5024]: I1007 14:00:03.449376 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcf2a65c-dda2-472d-94a8-071f57793310-config-volume\") pod \"dcf2a65c-dda2-472d-94a8-071f57793310\" (UID: \"dcf2a65c-dda2-472d-94a8-071f57793310\") " Oct 07 14:00:03 crc kubenswrapper[5024]: I1007 14:00:03.449667 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcf2a65c-dda2-472d-94a8-071f57793310-secret-volume\") pod \"dcf2a65c-dda2-472d-94a8-071f57793310\" (UID: \"dcf2a65c-dda2-472d-94a8-071f57793310\") " Oct 07 14:00:03 crc kubenswrapper[5024]: I1007 14:00:03.449775 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6gqv\" (UniqueName: \"kubernetes.io/projected/dcf2a65c-dda2-472d-94a8-071f57793310-kube-api-access-k6gqv\") pod \"dcf2a65c-dda2-472d-94a8-071f57793310\" (UID: \"dcf2a65c-dda2-472d-94a8-071f57793310\") " Oct 07 14:00:03 crc kubenswrapper[5024]: I1007 14:00:03.450501 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcf2a65c-dda2-472d-94a8-071f57793310-config-volume" (OuterVolumeSpecName: "config-volume") pod "dcf2a65c-dda2-472d-94a8-071f57793310" (UID: "dcf2a65c-dda2-472d-94a8-071f57793310"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 07 14:00:03 crc kubenswrapper[5024]: I1007 14:00:03.553603 5024 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcf2a65c-dda2-472d-94a8-071f57793310-config-volume\") on node \"crc\" DevicePath \"\"" Oct 07 14:00:03 crc kubenswrapper[5024]: I1007 14:00:03.895296 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29330760-lpxkw" event={"ID":"dcf2a65c-dda2-472d-94a8-071f57793310","Type":"ContainerDied","Data":"63b908f9876086c72d649fb0e64e46dd341efa6d6efd66a204eb9be02880193b"} Oct 07 14:00:03 crc kubenswrapper[5024]: I1007 14:00:03.895346 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63b908f9876086c72d649fb0e64e46dd341efa6d6efd66a204eb9be02880193b" Oct 07 14:00:03 crc kubenswrapper[5024]: I1007 14:00:03.895359 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29330760-lpxkw" Oct 07 14:00:03 crc kubenswrapper[5024]: I1007 14:00:03.993269 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcf2a65c-dda2-472d-94a8-071f57793310-kube-api-access-k6gqv" (OuterVolumeSpecName: "kube-api-access-k6gqv") pod "dcf2a65c-dda2-472d-94a8-071f57793310" (UID: "dcf2a65c-dda2-472d-94a8-071f57793310"). InnerVolumeSpecName "kube-api-access-k6gqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 14:00:03 crc kubenswrapper[5024]: I1007 14:00:03.993506 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcf2a65c-dda2-472d-94a8-071f57793310-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dcf2a65c-dda2-472d-94a8-071f57793310" (UID: "dcf2a65c-dda2-472d-94a8-071f57793310"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 14:00:04 crc kubenswrapper[5024]: I1007 14:00:04.061941 5024 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcf2a65c-dda2-472d-94a8-071f57793310-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 07 14:00:04 crc kubenswrapper[5024]: I1007 14:00:04.062004 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6gqv\" (UniqueName: \"kubernetes.io/projected/dcf2a65c-dda2-472d-94a8-071f57793310-kube-api-access-k6gqv\") on node \"crc\" DevicePath \"\"" Oct 07 14:00:04 crc kubenswrapper[5024]: I1007 14:00:04.419169 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29330715-p7htm"] Oct 07 14:00:04 crc kubenswrapper[5024]: I1007 14:00:04.425988 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29330715-p7htm"] Oct 07 14:00:04 crc kubenswrapper[5024]: I1007 14:00:04.764026 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c45edbff-f53e-4b87-8a17-85618fdbfc3a" path="/var/lib/kubelet/pods/c45edbff-f53e-4b87-8a17-85618fdbfc3a/volumes" Oct 07 14:00:06 crc kubenswrapper[5024]: I1007 14:00:06.750401 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-7fw2t_0abe2fa8-3512-46b2-a738-682a833ae488/control-plane-machine-set-operator/0.log" Oct 07 14:00:06 crc kubenswrapper[5024]: I1007 14:00:06.926876 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wp2gk_6047f06e-4b55-4a39-be9c-6341c8cf7082/kube-rbac-proxy/0.log" Oct 07 14:00:06 crc kubenswrapper[5024]: I1007 14:00:06.990857 5024 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wp2gk_6047f06e-4b55-4a39-be9c-6341c8cf7082/machine-api-operator/0.log" Oct 07 14:00:20 crc kubenswrapper[5024]: I1007 14:00:20.986725 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-hbtxm_9d627bc4-230f-466a-98af-87483cb62404/cert-manager-controller/0.log" Oct 07 14:00:21 crc kubenswrapper[5024]: I1007 14:00:21.142759 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-sx9fb_b7e175c1-168b-4ddc-8139-e7a758af32fb/cert-manager-cainjector/0.log" Oct 07 14:00:21 crc kubenswrapper[5024]: I1007 14:00:21.189671 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-mv7s2_d263f64b-97fc-41bb-9c16-467f65ebe30d/cert-manager-webhook/0.log" Oct 07 14:00:36 crc kubenswrapper[5024]: I1007 14:00:36.136542 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-75bxq_5ff62656-a8c7-4f5e-a6ea-6e3324d284ef/nmstate-console-plugin/0.log" Oct 07 14:00:36 crc kubenswrapper[5024]: I1007 14:00:36.302295 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-x47wd_1c8be1f9-a445-4dfd-9ad0-9c8b222e139e/nmstate-handler/0.log" Oct 07 14:00:36 crc kubenswrapper[5024]: I1007 14:00:36.319398 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-z7l7l_b466eea2-b593-4881-9b9e-af8b75bdead1/kube-rbac-proxy/0.log" Oct 07 14:00:36 crc kubenswrapper[5024]: I1007 14:00:36.345428 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-z7l7l_b466eea2-b593-4881-9b9e-af8b75bdead1/nmstate-metrics/0.log" Oct 07 14:00:36 crc kubenswrapper[5024]: I1007 14:00:36.523394 5024 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-858ddd8f98-4f6sz_bbb0788e-db6f-48ac-aaab-b61da783d4a1/nmstate-operator/0.log" Oct 07 14:00:36 crc kubenswrapper[5024]: I1007 14:00:36.583485 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6cdbc54649-szjk6_b253e4aa-8785-427b-bdf2-d2efa0af3671/nmstate-webhook/0.log" Oct 07 14:00:50 crc kubenswrapper[5024]: I1007 14:00:50.715757 5024 scope.go:117] "RemoveContainer" containerID="dac1fbd5f3f0938d592c58223599db712b5427c6c4356cd050e9785dfc35bc94" Oct 07 14:00:54 crc kubenswrapper[5024]: I1007 14:00:54.014585 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-zmgnv_5da747bb-51b6-4f81-bb62-ee6f0a4849f9/kube-rbac-proxy/0.log" Oct 07 14:00:54 crc kubenswrapper[5024]: I1007 14:00:54.134736 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-zmgnv_5da747bb-51b6-4f81-bb62-ee6f0a4849f9/controller/0.log" Oct 07 14:00:54 crc kubenswrapper[5024]: I1007 14:00:54.204476 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6frw_e1253450-f731-4770-b518-b8a4fa6138c5/cp-frr-files/0.log" Oct 07 14:00:54 crc kubenswrapper[5024]: I1007 14:00:54.443414 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6frw_e1253450-f731-4770-b518-b8a4fa6138c5/cp-metrics/0.log" Oct 07 14:00:54 crc kubenswrapper[5024]: I1007 14:00:54.463881 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6frw_e1253450-f731-4770-b518-b8a4fa6138c5/cp-reloader/0.log" Oct 07 14:00:54 crc kubenswrapper[5024]: I1007 14:00:54.463929 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6frw_e1253450-f731-4770-b518-b8a4fa6138c5/cp-reloader/0.log" Oct 07 14:00:54 crc kubenswrapper[5024]: I1007 14:00:54.493092 5024 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-t6frw_e1253450-f731-4770-b518-b8a4fa6138c5/cp-frr-files/0.log" Oct 07 14:00:54 crc kubenswrapper[5024]: I1007 14:00:54.682863 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6frw_e1253450-f731-4770-b518-b8a4fa6138c5/cp-frr-files/0.log" Oct 07 14:00:54 crc kubenswrapper[5024]: I1007 14:00:54.713032 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6frw_e1253450-f731-4770-b518-b8a4fa6138c5/cp-metrics/0.log" Oct 07 14:00:54 crc kubenswrapper[5024]: I1007 14:00:54.742645 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6frw_e1253450-f731-4770-b518-b8a4fa6138c5/cp-metrics/0.log" Oct 07 14:00:54 crc kubenswrapper[5024]: I1007 14:00:54.786698 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6frw_e1253450-f731-4770-b518-b8a4fa6138c5/cp-reloader/0.log" Oct 07 14:00:54 crc kubenswrapper[5024]: I1007 14:00:54.905414 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6frw_e1253450-f731-4770-b518-b8a4fa6138c5/cp-frr-files/0.log" Oct 07 14:00:54 crc kubenswrapper[5024]: I1007 14:00:54.920540 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6frw_e1253450-f731-4770-b518-b8a4fa6138c5/cp-reloader/0.log" Oct 07 14:00:54 crc kubenswrapper[5024]: I1007 14:00:54.976932 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6frw_e1253450-f731-4770-b518-b8a4fa6138c5/cp-metrics/0.log" Oct 07 14:00:55 crc kubenswrapper[5024]: I1007 14:00:55.013319 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6frw_e1253450-f731-4770-b518-b8a4fa6138c5/controller/0.log" Oct 07 14:00:55 crc kubenswrapper[5024]: I1007 14:00:55.174186 5024 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-t6frw_e1253450-f731-4770-b518-b8a4fa6138c5/frr-metrics/0.log" Oct 07 14:00:55 crc kubenswrapper[5024]: I1007 14:00:55.231162 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6frw_e1253450-f731-4770-b518-b8a4fa6138c5/kube-rbac-proxy-frr/0.log" Oct 07 14:00:55 crc kubenswrapper[5024]: I1007 14:00:55.242633 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6frw_e1253450-f731-4770-b518-b8a4fa6138c5/kube-rbac-proxy/0.log" Oct 07 14:00:55 crc kubenswrapper[5024]: I1007 14:00:55.423983 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6frw_e1253450-f731-4770-b518-b8a4fa6138c5/reloader/0.log" Oct 07 14:00:55 crc kubenswrapper[5024]: I1007 14:00:55.506816 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-64bf5d555-4c7dl_8d820a8c-2e7c-4433-a11d-8694890a25c3/frr-k8s-webhook-server/0.log" Oct 07 14:00:55 crc kubenswrapper[5024]: I1007 14:00:55.684739 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-664cbcfb76-9l549_675166a8-2c18-4526-bb5f-84ef53f3fcd8/manager/0.log" Oct 07 14:00:55 crc kubenswrapper[5024]: I1007 14:00:55.923196 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-67c46766d-n845m_135eccc1-c6b9-42f2-83f7-dd54c18c2ffc/webhook-server/0.log" Oct 07 14:00:56 crc kubenswrapper[5024]: I1007 14:00:56.457443 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rb2nn_85813e66-e855-404d-b428-6329919f1a42/kube-rbac-proxy/0.log" Oct 07 14:00:56 crc kubenswrapper[5024]: I1007 14:00:56.464277 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6frw_e1253450-f731-4770-b518-b8a4fa6138c5/frr/0.log" Oct 07 14:00:56 crc kubenswrapper[5024]: I1007 14:00:56.774806 5024 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rb2nn_85813e66-e855-404d-b428-6329919f1a42/speaker/0.log" Oct 07 14:01:00 crc kubenswrapper[5024]: I1007 14:01:00.169159 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29330761-dv9vm"] Oct 07 14:01:00 crc kubenswrapper[5024]: E1007 14:01:00.170220 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcf2a65c-dda2-472d-94a8-071f57793310" containerName="collect-profiles" Oct 07 14:01:00 crc kubenswrapper[5024]: I1007 14:01:00.170240 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcf2a65c-dda2-472d-94a8-071f57793310" containerName="collect-profiles" Oct 07 14:01:00 crc kubenswrapper[5024]: I1007 14:01:00.170527 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcf2a65c-dda2-472d-94a8-071f57793310" containerName="collect-profiles" Oct 07 14:01:00 crc kubenswrapper[5024]: I1007 14:01:00.171362 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29330761-dv9vm" Oct 07 14:01:00 crc kubenswrapper[5024]: I1007 14:01:00.182277 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29330761-dv9vm"] Oct 07 14:01:00 crc kubenswrapper[5024]: I1007 14:01:00.204393 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/819ce37b-e323-4b0b-9e99-89e2a80c231e-config-data\") pod \"keystone-cron-29330761-dv9vm\" (UID: \"819ce37b-e323-4b0b-9e99-89e2a80c231e\") " pod="openstack/keystone-cron-29330761-dv9vm" Oct 07 14:01:00 crc kubenswrapper[5024]: I1007 14:01:00.204634 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/819ce37b-e323-4b0b-9e99-89e2a80c231e-combined-ca-bundle\") pod \"keystone-cron-29330761-dv9vm\" (UID: \"819ce37b-e323-4b0b-9e99-89e2a80c231e\") " pod="openstack/keystone-cron-29330761-dv9vm" Oct 07 14:01:00 crc kubenswrapper[5024]: I1007 14:01:00.204762 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9drq\" (UniqueName: \"kubernetes.io/projected/819ce37b-e323-4b0b-9e99-89e2a80c231e-kube-api-access-c9drq\") pod \"keystone-cron-29330761-dv9vm\" (UID: \"819ce37b-e323-4b0b-9e99-89e2a80c231e\") " pod="openstack/keystone-cron-29330761-dv9vm" Oct 07 14:01:00 crc kubenswrapper[5024]: I1007 14:01:00.204909 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/819ce37b-e323-4b0b-9e99-89e2a80c231e-fernet-keys\") pod \"keystone-cron-29330761-dv9vm\" (UID: \"819ce37b-e323-4b0b-9e99-89e2a80c231e\") " pod="openstack/keystone-cron-29330761-dv9vm" Oct 07 14:01:00 crc kubenswrapper[5024]: I1007 14:01:00.307466 5024 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/819ce37b-e323-4b0b-9e99-89e2a80c231e-fernet-keys\") pod \"keystone-cron-29330761-dv9vm\" (UID: \"819ce37b-e323-4b0b-9e99-89e2a80c231e\") " pod="openstack/keystone-cron-29330761-dv9vm" Oct 07 14:01:00 crc kubenswrapper[5024]: I1007 14:01:00.307645 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/819ce37b-e323-4b0b-9e99-89e2a80c231e-config-data\") pod \"keystone-cron-29330761-dv9vm\" (UID: \"819ce37b-e323-4b0b-9e99-89e2a80c231e\") " pod="openstack/keystone-cron-29330761-dv9vm" Oct 07 14:01:00 crc kubenswrapper[5024]: I1007 14:01:00.307716 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/819ce37b-e323-4b0b-9e99-89e2a80c231e-combined-ca-bundle\") pod \"keystone-cron-29330761-dv9vm\" (UID: \"819ce37b-e323-4b0b-9e99-89e2a80c231e\") " pod="openstack/keystone-cron-29330761-dv9vm" Oct 07 14:01:00 crc kubenswrapper[5024]: I1007 14:01:00.307766 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9drq\" (UniqueName: \"kubernetes.io/projected/819ce37b-e323-4b0b-9e99-89e2a80c231e-kube-api-access-c9drq\") pod \"keystone-cron-29330761-dv9vm\" (UID: \"819ce37b-e323-4b0b-9e99-89e2a80c231e\") " pod="openstack/keystone-cron-29330761-dv9vm" Oct 07 14:01:00 crc kubenswrapper[5024]: I1007 14:01:00.315233 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/819ce37b-e323-4b0b-9e99-89e2a80c231e-combined-ca-bundle\") pod \"keystone-cron-29330761-dv9vm\" (UID: \"819ce37b-e323-4b0b-9e99-89e2a80c231e\") " pod="openstack/keystone-cron-29330761-dv9vm" Oct 07 14:01:00 crc kubenswrapper[5024]: I1007 14:01:00.318042 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/819ce37b-e323-4b0b-9e99-89e2a80c231e-fernet-keys\") pod \"keystone-cron-29330761-dv9vm\" (UID: \"819ce37b-e323-4b0b-9e99-89e2a80c231e\") " pod="openstack/keystone-cron-29330761-dv9vm" Oct 07 14:01:00 crc kubenswrapper[5024]: I1007 14:01:00.320669 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/819ce37b-e323-4b0b-9e99-89e2a80c231e-config-data\") pod \"keystone-cron-29330761-dv9vm\" (UID: \"819ce37b-e323-4b0b-9e99-89e2a80c231e\") " pod="openstack/keystone-cron-29330761-dv9vm" Oct 07 14:01:00 crc kubenswrapper[5024]: I1007 14:01:00.328948 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9drq\" (UniqueName: \"kubernetes.io/projected/819ce37b-e323-4b0b-9e99-89e2a80c231e-kube-api-access-c9drq\") pod \"keystone-cron-29330761-dv9vm\" (UID: \"819ce37b-e323-4b0b-9e99-89e2a80c231e\") " pod="openstack/keystone-cron-29330761-dv9vm" Oct 07 14:01:00 crc kubenswrapper[5024]: I1007 14:01:00.503810 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29330761-dv9vm" Oct 07 14:01:01 crc kubenswrapper[5024]: I1007 14:01:01.050009 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29330761-dv9vm"] Oct 07 14:01:02 crc kubenswrapper[5024]: I1007 14:01:02.518795 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29330761-dv9vm" event={"ID":"819ce37b-e323-4b0b-9e99-89e2a80c231e","Type":"ContainerStarted","Data":"a25a073de882a938b911cf27adf71063ce51e4fb95a58ef282689c26fd9dddfa"} Oct 07 14:01:02 crc kubenswrapper[5024]: I1007 14:01:02.519223 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29330761-dv9vm" event={"ID":"819ce37b-e323-4b0b-9e99-89e2a80c231e","Type":"ContainerStarted","Data":"90b0d29f771c0b6af434feb9e6eb41768208d2a7c75f0abcb22048aa0154e108"} Oct 07 14:01:05 crc kubenswrapper[5024]: I1007 14:01:05.554835 5024 generic.go:334] "Generic (PLEG): container finished" podID="819ce37b-e323-4b0b-9e99-89e2a80c231e" containerID="a25a073de882a938b911cf27adf71063ce51e4fb95a58ef282689c26fd9dddfa" exitCode=0 Oct 07 14:01:05 crc kubenswrapper[5024]: I1007 14:01:05.554969 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29330761-dv9vm" event={"ID":"819ce37b-e323-4b0b-9e99-89e2a80c231e","Type":"ContainerDied","Data":"a25a073de882a938b911cf27adf71063ce51e4fb95a58ef282689c26fd9dddfa"} Oct 07 14:01:07 crc kubenswrapper[5024]: I1007 14:01:07.009768 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29330761-dv9vm" Oct 07 14:01:07 crc kubenswrapper[5024]: I1007 14:01:07.160948 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/819ce37b-e323-4b0b-9e99-89e2a80c231e-combined-ca-bundle\") pod \"819ce37b-e323-4b0b-9e99-89e2a80c231e\" (UID: \"819ce37b-e323-4b0b-9e99-89e2a80c231e\") " Oct 07 14:01:07 crc kubenswrapper[5024]: I1007 14:01:07.161329 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9drq\" (UniqueName: \"kubernetes.io/projected/819ce37b-e323-4b0b-9e99-89e2a80c231e-kube-api-access-c9drq\") pod \"819ce37b-e323-4b0b-9e99-89e2a80c231e\" (UID: \"819ce37b-e323-4b0b-9e99-89e2a80c231e\") " Oct 07 14:01:07 crc kubenswrapper[5024]: I1007 14:01:07.161765 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/819ce37b-e323-4b0b-9e99-89e2a80c231e-config-data\") pod \"819ce37b-e323-4b0b-9e99-89e2a80c231e\" (UID: \"819ce37b-e323-4b0b-9e99-89e2a80c231e\") " Oct 07 14:01:07 crc kubenswrapper[5024]: I1007 14:01:07.162982 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/819ce37b-e323-4b0b-9e99-89e2a80c231e-fernet-keys\") pod \"819ce37b-e323-4b0b-9e99-89e2a80c231e\" (UID: \"819ce37b-e323-4b0b-9e99-89e2a80c231e\") " Oct 07 14:01:07 crc kubenswrapper[5024]: I1007 14:01:07.168252 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/819ce37b-e323-4b0b-9e99-89e2a80c231e-kube-api-access-c9drq" (OuterVolumeSpecName: "kube-api-access-c9drq") pod "819ce37b-e323-4b0b-9e99-89e2a80c231e" (UID: "819ce37b-e323-4b0b-9e99-89e2a80c231e"). InnerVolumeSpecName "kube-api-access-c9drq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 14:01:07 crc kubenswrapper[5024]: I1007 14:01:07.168813 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/819ce37b-e323-4b0b-9e99-89e2a80c231e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "819ce37b-e323-4b0b-9e99-89e2a80c231e" (UID: "819ce37b-e323-4b0b-9e99-89e2a80c231e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 14:01:07 crc kubenswrapper[5024]: I1007 14:01:07.191249 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/819ce37b-e323-4b0b-9e99-89e2a80c231e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "819ce37b-e323-4b0b-9e99-89e2a80c231e" (UID: "819ce37b-e323-4b0b-9e99-89e2a80c231e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 14:01:07 crc kubenswrapper[5024]: I1007 14:01:07.230100 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/819ce37b-e323-4b0b-9e99-89e2a80c231e-config-data" (OuterVolumeSpecName: "config-data") pod "819ce37b-e323-4b0b-9e99-89e2a80c231e" (UID: "819ce37b-e323-4b0b-9e99-89e2a80c231e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 07 14:01:07 crc kubenswrapper[5024]: I1007 14:01:07.267065 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/819ce37b-e323-4b0b-9e99-89e2a80c231e-config-data\") on node \"crc\" DevicePath \"\"" Oct 07 14:01:07 crc kubenswrapper[5024]: I1007 14:01:07.267134 5024 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/819ce37b-e323-4b0b-9e99-89e2a80c231e-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 07 14:01:07 crc kubenswrapper[5024]: I1007 14:01:07.267178 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/819ce37b-e323-4b0b-9e99-89e2a80c231e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 07 14:01:07 crc kubenswrapper[5024]: I1007 14:01:07.267202 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9drq\" (UniqueName: \"kubernetes.io/projected/819ce37b-e323-4b0b-9e99-89e2a80c231e-kube-api-access-c9drq\") on node \"crc\" DevicePath \"\"" Oct 07 14:01:07 crc kubenswrapper[5024]: I1007 14:01:07.577771 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29330761-dv9vm" event={"ID":"819ce37b-e323-4b0b-9e99-89e2a80c231e","Type":"ContainerDied","Data":"90b0d29f771c0b6af434feb9e6eb41768208d2a7c75f0abcb22048aa0154e108"} Oct 07 14:01:07 crc kubenswrapper[5024]: I1007 14:01:07.578185 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90b0d29f771c0b6af434feb9e6eb41768208d2a7c75f0abcb22048aa0154e108" Oct 07 14:01:07 crc kubenswrapper[5024]: I1007 14:01:07.577850 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29330761-dv9vm" Oct 07 14:01:12 crc kubenswrapper[5024]: I1007 14:01:12.425418 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks_38286d5e-c3d8-49a7-89f9-88e4ba1ed331/util/0.log" Oct 07 14:01:12 crc kubenswrapper[5024]: I1007 14:01:12.595743 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks_38286d5e-c3d8-49a7-89f9-88e4ba1ed331/util/0.log" Oct 07 14:01:12 crc kubenswrapper[5024]: I1007 14:01:12.619378 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks_38286d5e-c3d8-49a7-89f9-88e4ba1ed331/pull/0.log" Oct 07 14:01:12 crc kubenswrapper[5024]: I1007 14:01:12.660765 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks_38286d5e-c3d8-49a7-89f9-88e4ba1ed331/pull/0.log" Oct 07 14:01:12 crc kubenswrapper[5024]: I1007 14:01:12.823093 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks_38286d5e-c3d8-49a7-89f9-88e4ba1ed331/pull/0.log" Oct 07 14:01:12 crc kubenswrapper[5024]: I1007 14:01:12.845653 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks_38286d5e-c3d8-49a7-89f9-88e4ba1ed331/util/0.log" Oct 07 14:01:12 crc kubenswrapper[5024]: I1007 14:01:12.847676 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27xfks_38286d5e-c3d8-49a7-89f9-88e4ba1ed331/extract/0.log" Oct 07 14:01:13 crc kubenswrapper[5024]: I1007 14:01:13.016583 5024 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-h2zjw_408f514a-005e-4955-bca5-de53bc46161b/extract-utilities/0.log" Oct 07 14:01:13 crc kubenswrapper[5024]: I1007 14:01:13.170880 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-h2zjw_408f514a-005e-4955-bca5-de53bc46161b/extract-content/0.log" Oct 07 14:01:13 crc kubenswrapper[5024]: I1007 14:01:13.173787 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-h2zjw_408f514a-005e-4955-bca5-de53bc46161b/extract-utilities/0.log" Oct 07 14:01:13 crc kubenswrapper[5024]: I1007 14:01:13.204423 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-h2zjw_408f514a-005e-4955-bca5-de53bc46161b/extract-content/0.log" Oct 07 14:01:13 crc kubenswrapper[5024]: I1007 14:01:13.368608 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-h2zjw_408f514a-005e-4955-bca5-de53bc46161b/extract-utilities/0.log" Oct 07 14:01:13 crc kubenswrapper[5024]: I1007 14:01:13.396219 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-h2zjw_408f514a-005e-4955-bca5-de53bc46161b/extract-content/0.log" Oct 07 14:01:13 crc kubenswrapper[5024]: I1007 14:01:13.659952 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mjz7t_f3b3a3ab-50e1-4fc2-942e-b860023a3bb5/extract-utilities/0.log" Oct 07 14:01:13 crc kubenswrapper[5024]: I1007 14:01:13.845582 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mjz7t_f3b3a3ab-50e1-4fc2-942e-b860023a3bb5/extract-content/0.log" Oct 07 14:01:13 crc kubenswrapper[5024]: I1007 14:01:13.896253 5024 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-mjz7t_f3b3a3ab-50e1-4fc2-942e-b860023a3bb5/extract-content/0.log" Oct 07 14:01:13 crc kubenswrapper[5024]: I1007 14:01:13.938638 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mjz7t_f3b3a3ab-50e1-4fc2-942e-b860023a3bb5/extract-utilities/0.log" Oct 07 14:01:14 crc kubenswrapper[5024]: I1007 14:01:14.029845 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-h2zjw_408f514a-005e-4955-bca5-de53bc46161b/registry-server/0.log" Oct 07 14:01:14 crc kubenswrapper[5024]: I1007 14:01:14.089394 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mjz7t_f3b3a3ab-50e1-4fc2-942e-b860023a3bb5/extract-content/0.log" Oct 07 14:01:14 crc kubenswrapper[5024]: I1007 14:01:14.114455 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mjz7t_f3b3a3ab-50e1-4fc2-942e-b860023a3bb5/extract-utilities/0.log" Oct 07 14:01:14 crc kubenswrapper[5024]: I1007 14:01:14.381386 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr_13571b28-d44b-4e20-8e38-b6577b12fddf/util/0.log" Oct 07 14:01:14 crc kubenswrapper[5024]: I1007 14:01:14.609581 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr_13571b28-d44b-4e20-8e38-b6577b12fddf/pull/0.log" Oct 07 14:01:14 crc kubenswrapper[5024]: I1007 14:01:14.654539 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr_13571b28-d44b-4e20-8e38-b6577b12fddf/pull/0.log" Oct 07 14:01:14 crc kubenswrapper[5024]: I1007 14:01:14.660837 5024 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr_13571b28-d44b-4e20-8e38-b6577b12fddf/util/0.log" Oct 07 14:01:14 crc kubenswrapper[5024]: I1007 14:01:14.876470 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr_13571b28-d44b-4e20-8e38-b6577b12fddf/util/0.log" Oct 07 14:01:14 crc kubenswrapper[5024]: I1007 14:01:14.885442 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr_13571b28-d44b-4e20-8e38-b6577b12fddf/extract/0.log" Oct 07 14:01:14 crc kubenswrapper[5024]: I1007 14:01:14.970744 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cgpzkr_13571b28-d44b-4e20-8e38-b6577b12fddf/pull/0.log" Oct 07 14:01:15 crc kubenswrapper[5024]: I1007 14:01:15.016758 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mjz7t_f3b3a3ab-50e1-4fc2-942e-b860023a3bb5/registry-server/0.log" Oct 07 14:01:15 crc kubenswrapper[5024]: I1007 14:01:15.801537 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-w7cd9_0680117f-db9e-4d13-b02d-8a851e374b1f/marketplace-operator/0.log" Oct 07 14:01:15 crc kubenswrapper[5024]: I1007 14:01:15.846475 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zv7nb_637ca0b1-6e76-47ae-a1d0-373212a885fa/extract-utilities/0.log" Oct 07 14:01:16 crc kubenswrapper[5024]: I1007 14:01:16.023405 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zv7nb_637ca0b1-6e76-47ae-a1d0-373212a885fa/extract-content/0.log" Oct 07 14:01:16 crc kubenswrapper[5024]: I1007 14:01:16.136914 5024 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zv7nb_637ca0b1-6e76-47ae-a1d0-373212a885fa/extract-utilities/0.log" Oct 07 14:01:16 crc kubenswrapper[5024]: I1007 14:01:16.154768 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zv7nb_637ca0b1-6e76-47ae-a1d0-373212a885fa/extract-content/0.log" Oct 07 14:01:16 crc kubenswrapper[5024]: I1007 14:01:16.314974 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zv7nb_637ca0b1-6e76-47ae-a1d0-373212a885fa/extract-utilities/0.log" Oct 07 14:01:16 crc kubenswrapper[5024]: I1007 14:01:16.320313 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zv7nb_637ca0b1-6e76-47ae-a1d0-373212a885fa/extract-content/0.log" Oct 07 14:01:16 crc kubenswrapper[5024]: I1007 14:01:16.363982 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2bsdt_b28dc7c2-402e-4f40-a836-86485f2bcb36/extract-utilities/0.log" Oct 07 14:01:16 crc kubenswrapper[5024]: I1007 14:01:16.577953 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zv7nb_637ca0b1-6e76-47ae-a1d0-373212a885fa/registry-server/0.log" Oct 07 14:01:16 crc kubenswrapper[5024]: I1007 14:01:16.620678 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2bsdt_b28dc7c2-402e-4f40-a836-86485f2bcb36/extract-utilities/0.log" Oct 07 14:01:16 crc kubenswrapper[5024]: I1007 14:01:16.633276 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2bsdt_b28dc7c2-402e-4f40-a836-86485f2bcb36/extract-content/0.log" Oct 07 14:01:16 crc kubenswrapper[5024]: I1007 14:01:16.653212 5024 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-2bsdt_b28dc7c2-402e-4f40-a836-86485f2bcb36/extract-content/0.log" Oct 07 14:01:16 crc kubenswrapper[5024]: I1007 14:01:16.802075 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2bsdt_b28dc7c2-402e-4f40-a836-86485f2bcb36/extract-content/0.log" Oct 07 14:01:16 crc kubenswrapper[5024]: I1007 14:01:16.853997 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2bsdt_b28dc7c2-402e-4f40-a836-86485f2bcb36/extract-utilities/0.log" Oct 07 14:01:17 crc kubenswrapper[5024]: I1007 14:01:17.517801 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2bsdt_b28dc7c2-402e-4f40-a836-86485f2bcb36/registry-server/0.log" Oct 07 14:02:13 crc kubenswrapper[5024]: I1007 14:02:13.720863 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 14:02:13 crc kubenswrapper[5024]: I1007 14:02:13.721773 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 14:02:43 crc kubenswrapper[5024]: I1007 14:02:43.720745 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 14:02:43 crc kubenswrapper[5024]: I1007 14:02:43.721756 5024 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 14:03:13 crc kubenswrapper[5024]: I1007 14:03:13.720091 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 07 14:03:13 crc kubenswrapper[5024]: I1007 14:03:13.720875 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 07 14:03:13 crc kubenswrapper[5024]: I1007 14:03:13.720944 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" Oct 07 14:03:13 crc kubenswrapper[5024]: I1007 14:03:13.722250 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"89d8ab8230c738cc3a8301230394a490874c2e0c066dfe5b732600ea29f417b2"} pod="openshift-machine-config-operator/machine-config-daemon-t95cr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 07 14:03:13 crc kubenswrapper[5024]: I1007 14:03:13.722363 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" 
containerName="machine-config-daemon" containerID="cri-o://89d8ab8230c738cc3a8301230394a490874c2e0c066dfe5b732600ea29f417b2" gracePeriod=600 Oct 07 14:03:13 crc kubenswrapper[5024]: I1007 14:03:13.965862 5024 generic.go:334] "Generic (PLEG): container finished" podID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerID="89d8ab8230c738cc3a8301230394a490874c2e0c066dfe5b732600ea29f417b2" exitCode=0 Oct 07 14:03:13 crc kubenswrapper[5024]: I1007 14:03:13.965937 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerDied","Data":"89d8ab8230c738cc3a8301230394a490874c2e0c066dfe5b732600ea29f417b2"} Oct 07 14:03:13 crc kubenswrapper[5024]: I1007 14:03:13.966237 5024 scope.go:117] "RemoveContainer" containerID="667dc67911ef8a9bbb21e4465de117e7384fd0b2367564e976e01011d5e96599" Oct 07 14:03:14 crc kubenswrapper[5024]: I1007 14:03:14.981602 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" event={"ID":"273432b3-0436-4a74-afa3-7070f9bf5b3b","Type":"ContainerStarted","Data":"d9fb98ee269f27fbd21972b9f5c2d649ecf676f5668547ee02d50f908e50bab6"} Oct 07 14:03:43 crc kubenswrapper[5024]: I1007 14:03:43.346091 5024 generic.go:334] "Generic (PLEG): container finished" podID="bd4bc398-eaaa-436a-8ab3-9b4fff8afb84" containerID="db39f97a1663585755dc99ed6b3ac0fa01eca5ccc5a8f43e1c6e28c4c6bf6ecf" exitCode=0 Oct 07 14:03:43 crc kubenswrapper[5024]: I1007 14:03:43.346332 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jrhdn/must-gather-wc46w" event={"ID":"bd4bc398-eaaa-436a-8ab3-9b4fff8afb84","Type":"ContainerDied","Data":"db39f97a1663585755dc99ed6b3ac0fa01eca5ccc5a8f43e1c6e28c4c6bf6ecf"} Oct 07 14:03:43 crc kubenswrapper[5024]: I1007 14:03:43.347313 5024 scope.go:117] "RemoveContainer" 
containerID="db39f97a1663585755dc99ed6b3ac0fa01eca5ccc5a8f43e1c6e28c4c6bf6ecf" Oct 07 14:03:43 crc kubenswrapper[5024]: I1007 14:03:43.640676 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jrhdn_must-gather-wc46w_bd4bc398-eaaa-436a-8ab3-9b4fff8afb84/gather/0.log" Oct 07 14:03:50 crc kubenswrapper[5024]: I1007 14:03:50.889095 5024 scope.go:117] "RemoveContainer" containerID="5f57a597e3474973ec28fbbb1d09a978f7fa851587302befe36192657fdfee56" Oct 07 14:03:52 crc kubenswrapper[5024]: I1007 14:03:52.397513 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jrhdn/must-gather-wc46w"] Oct 07 14:03:52 crc kubenswrapper[5024]: I1007 14:03:52.398217 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-jrhdn/must-gather-wc46w" podUID="bd4bc398-eaaa-436a-8ab3-9b4fff8afb84" containerName="copy" containerID="cri-o://3021d067efb8d28ec027b33a68e895c3f0fd526757074b94d539b63e435226a2" gracePeriod=2 Oct 07 14:03:52 crc kubenswrapper[5024]: I1007 14:03:52.406742 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jrhdn/must-gather-wc46w"] Oct 07 14:03:52 crc kubenswrapper[5024]: I1007 14:03:52.844379 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jrhdn_must-gather-wc46w_bd4bc398-eaaa-436a-8ab3-9b4fff8afb84/copy/0.log" Oct 07 14:03:52 crc kubenswrapper[5024]: I1007 14:03:52.845132 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jrhdn/must-gather-wc46w" Oct 07 14:03:52 crc kubenswrapper[5024]: I1007 14:03:52.862085 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc2t6\" (UniqueName: \"kubernetes.io/projected/bd4bc398-eaaa-436a-8ab3-9b4fff8afb84-kube-api-access-cc2t6\") pod \"bd4bc398-eaaa-436a-8ab3-9b4fff8afb84\" (UID: \"bd4bc398-eaaa-436a-8ab3-9b4fff8afb84\") " Oct 07 14:03:52 crc kubenswrapper[5024]: I1007 14:03:52.862338 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd4bc398-eaaa-436a-8ab3-9b4fff8afb84-must-gather-output\") pod \"bd4bc398-eaaa-436a-8ab3-9b4fff8afb84\" (UID: \"bd4bc398-eaaa-436a-8ab3-9b4fff8afb84\") " Oct 07 14:03:52 crc kubenswrapper[5024]: I1007 14:03:52.869061 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd4bc398-eaaa-436a-8ab3-9b4fff8afb84-kube-api-access-cc2t6" (OuterVolumeSpecName: "kube-api-access-cc2t6") pod "bd4bc398-eaaa-436a-8ab3-9b4fff8afb84" (UID: "bd4bc398-eaaa-436a-8ab3-9b4fff8afb84"). InnerVolumeSpecName "kube-api-access-cc2t6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 07 14:03:52 crc kubenswrapper[5024]: I1007 14:03:52.969782 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc2t6\" (UniqueName: \"kubernetes.io/projected/bd4bc398-eaaa-436a-8ab3-9b4fff8afb84-kube-api-access-cc2t6\") on node \"crc\" DevicePath \"\"" Oct 07 14:03:53 crc kubenswrapper[5024]: I1007 14:03:53.061580 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd4bc398-eaaa-436a-8ab3-9b4fff8afb84-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bd4bc398-eaaa-436a-8ab3-9b4fff8afb84" (UID: "bd4bc398-eaaa-436a-8ab3-9b4fff8afb84"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 07 14:03:53 crc kubenswrapper[5024]: I1007 14:03:53.071972 5024 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd4bc398-eaaa-436a-8ab3-9b4fff8afb84-must-gather-output\") on node \"crc\" DevicePath \"\""
Oct 07 14:03:53 crc kubenswrapper[5024]: I1007 14:03:53.472048 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jrhdn_must-gather-wc46w_bd4bc398-eaaa-436a-8ab3-9b4fff8afb84/copy/0.log"
Oct 07 14:03:53 crc kubenswrapper[5024]: I1007 14:03:53.472613 5024 generic.go:334] "Generic (PLEG): container finished" podID="bd4bc398-eaaa-436a-8ab3-9b4fff8afb84" containerID="3021d067efb8d28ec027b33a68e895c3f0fd526757074b94d539b63e435226a2" exitCode=143
Oct 07 14:03:53 crc kubenswrapper[5024]: I1007 14:03:53.472686 5024 scope.go:117] "RemoveContainer" containerID="3021d067efb8d28ec027b33a68e895c3f0fd526757074b94d539b63e435226a2"
Oct 07 14:03:53 crc kubenswrapper[5024]: I1007 14:03:53.472736 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jrhdn/must-gather-wc46w"
Oct 07 14:03:53 crc kubenswrapper[5024]: I1007 14:03:53.501417 5024 scope.go:117] "RemoveContainer" containerID="db39f97a1663585755dc99ed6b3ac0fa01eca5ccc5a8f43e1c6e28c4c6bf6ecf"
Oct 07 14:03:53 crc kubenswrapper[5024]: I1007 14:03:53.588421 5024 scope.go:117] "RemoveContainer" containerID="3021d067efb8d28ec027b33a68e895c3f0fd526757074b94d539b63e435226a2"
Oct 07 14:03:53 crc kubenswrapper[5024]: E1007 14:03:53.588738 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3021d067efb8d28ec027b33a68e895c3f0fd526757074b94d539b63e435226a2\": container with ID starting with 3021d067efb8d28ec027b33a68e895c3f0fd526757074b94d539b63e435226a2 not found: ID does not exist" containerID="3021d067efb8d28ec027b33a68e895c3f0fd526757074b94d539b63e435226a2"
Oct 07 14:03:53 crc kubenswrapper[5024]: I1007 14:03:53.588764 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3021d067efb8d28ec027b33a68e895c3f0fd526757074b94d539b63e435226a2"} err="failed to get container status \"3021d067efb8d28ec027b33a68e895c3f0fd526757074b94d539b63e435226a2\": rpc error: code = NotFound desc = could not find container \"3021d067efb8d28ec027b33a68e895c3f0fd526757074b94d539b63e435226a2\": container with ID starting with 3021d067efb8d28ec027b33a68e895c3f0fd526757074b94d539b63e435226a2 not found: ID does not exist"
Oct 07 14:03:53 crc kubenswrapper[5024]: I1007 14:03:53.588783 5024 scope.go:117] "RemoveContainer" containerID="db39f97a1663585755dc99ed6b3ac0fa01eca5ccc5a8f43e1c6e28c4c6bf6ecf"
Oct 07 14:03:53 crc kubenswrapper[5024]: E1007 14:03:53.589299 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db39f97a1663585755dc99ed6b3ac0fa01eca5ccc5a8f43e1c6e28c4c6bf6ecf\": container with ID starting with db39f97a1663585755dc99ed6b3ac0fa01eca5ccc5a8f43e1c6e28c4c6bf6ecf not found: ID does not exist" containerID="db39f97a1663585755dc99ed6b3ac0fa01eca5ccc5a8f43e1c6e28c4c6bf6ecf"
Oct 07 14:03:53 crc kubenswrapper[5024]: I1007 14:03:53.589362 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db39f97a1663585755dc99ed6b3ac0fa01eca5ccc5a8f43e1c6e28c4c6bf6ecf"} err="failed to get container status \"db39f97a1663585755dc99ed6b3ac0fa01eca5ccc5a8f43e1c6e28c4c6bf6ecf\": rpc error: code = NotFound desc = could not find container \"db39f97a1663585755dc99ed6b3ac0fa01eca5ccc5a8f43e1c6e28c4c6bf6ecf\": container with ID starting with db39f97a1663585755dc99ed6b3ac0fa01eca5ccc5a8f43e1c6e28c4c6bf6ecf not found: ID does not exist"
Oct 07 14:03:54 crc kubenswrapper[5024]: I1007 14:03:54.762400 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd4bc398-eaaa-436a-8ab3-9b4fff8afb84" path="/var/lib/kubelet/pods/bd4bc398-eaaa-436a-8ab3-9b4fff8afb84/volumes"
Oct 07 14:05:43 crc kubenswrapper[5024]: I1007 14:05:43.720312 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 07 14:05:43 crc kubenswrapper[5024]: I1007 14:05:43.720939 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 07 14:05:51 crc kubenswrapper[5024]: I1007 14:05:51.010377 5024 scope.go:117] "RemoveContainer" containerID="f172385825220194eb279eb7c52875e61acdd1dbf94e906743c0b7cf505ff46e"
Oct 07 14:06:13 crc kubenswrapper[5024]: I1007 14:06:13.720924 5024 patch_prober.go:28] interesting pod/machine-config-daemon-t95cr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 07 14:06:13 crc kubenswrapper[5024]: I1007 14:06:13.721540 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t95cr" podUID="273432b3-0436-4a74-afa3-7070f9bf5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 07 14:06:30 crc kubenswrapper[5024]: I1007 14:06:30.835201 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kmsg8"]
Oct 07 14:06:30 crc kubenswrapper[5024]: E1007 14:06:30.836073 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd4bc398-eaaa-436a-8ab3-9b4fff8afb84" containerName="copy"
Oct 07 14:06:30 crc kubenswrapper[5024]: I1007 14:06:30.836084 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd4bc398-eaaa-436a-8ab3-9b4fff8afb84" containerName="copy"
Oct 07 14:06:30 crc kubenswrapper[5024]: E1007 14:06:30.836099 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd4bc398-eaaa-436a-8ab3-9b4fff8afb84" containerName="gather"
Oct 07 14:06:30 crc kubenswrapper[5024]: I1007 14:06:30.836105 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd4bc398-eaaa-436a-8ab3-9b4fff8afb84" containerName="gather"
Oct 07 14:06:30 crc kubenswrapper[5024]: E1007 14:06:30.836116 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="819ce37b-e323-4b0b-9e99-89e2a80c231e" containerName="keystone-cron"
Oct 07 14:06:30 crc kubenswrapper[5024]: I1007 14:06:30.836123 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="819ce37b-e323-4b0b-9e99-89e2a80c231e" containerName="keystone-cron"
Oct 07 14:06:30 crc kubenswrapper[5024]: I1007 14:06:30.836671 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd4bc398-eaaa-436a-8ab3-9b4fff8afb84" containerName="gather"
Oct 07 14:06:30 crc kubenswrapper[5024]: I1007 14:06:30.836689 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="819ce37b-e323-4b0b-9e99-89e2a80c231e" containerName="keystone-cron"
Oct 07 14:06:30 crc kubenswrapper[5024]: I1007 14:06:30.836707 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd4bc398-eaaa-436a-8ab3-9b4fff8afb84" containerName="copy"
Oct 07 14:06:30 crc kubenswrapper[5024]: I1007 14:06:30.837966 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kmsg8"
Oct 07 14:06:30 crc kubenswrapper[5024]: I1007 14:06:30.850129 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kmsg8"]
Oct 07 14:06:30 crc kubenswrapper[5024]: I1007 14:06:30.946715 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36c2aad0-e403-4a0c-b210-3ff9e24a3622-utilities\") pod \"certified-operators-kmsg8\" (UID: \"36c2aad0-e403-4a0c-b210-3ff9e24a3622\") " pod="openshift-marketplace/certified-operators-kmsg8"
Oct 07 14:06:30 crc kubenswrapper[5024]: I1007 14:06:30.946806 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36c2aad0-e403-4a0c-b210-3ff9e24a3622-catalog-content\") pod \"certified-operators-kmsg8\" (UID: \"36c2aad0-e403-4a0c-b210-3ff9e24a3622\") " pod="openshift-marketplace/certified-operators-kmsg8"
Oct 07 14:06:30 crc kubenswrapper[5024]: I1007 14:06:30.947215 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kcjj\" (UniqueName: \"kubernetes.io/projected/36c2aad0-e403-4a0c-b210-3ff9e24a3622-kube-api-access-7kcjj\") pod \"certified-operators-kmsg8\" (UID: \"36c2aad0-e403-4a0c-b210-3ff9e24a3622\") " pod="openshift-marketplace/certified-operators-kmsg8"
Oct 07 14:06:31 crc kubenswrapper[5024]: I1007 14:06:31.053097 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36c2aad0-e403-4a0c-b210-3ff9e24a3622-utilities\") pod \"certified-operators-kmsg8\" (UID: \"36c2aad0-e403-4a0c-b210-3ff9e24a3622\") " pod="openshift-marketplace/certified-operators-kmsg8"
Oct 07 14:06:31 crc kubenswrapper[5024]: I1007 14:06:31.053237 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36c2aad0-e403-4a0c-b210-3ff9e24a3622-catalog-content\") pod \"certified-operators-kmsg8\" (UID: \"36c2aad0-e403-4a0c-b210-3ff9e24a3622\") " pod="openshift-marketplace/certified-operators-kmsg8"
Oct 07 14:06:31 crc kubenswrapper[5024]: I1007 14:06:31.053480 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kcjj\" (UniqueName: \"kubernetes.io/projected/36c2aad0-e403-4a0c-b210-3ff9e24a3622-kube-api-access-7kcjj\") pod \"certified-operators-kmsg8\" (UID: \"36c2aad0-e403-4a0c-b210-3ff9e24a3622\") " pod="openshift-marketplace/certified-operators-kmsg8"
Oct 07 14:06:31 crc kubenswrapper[5024]: I1007 14:06:31.053740 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36c2aad0-e403-4a0c-b210-3ff9e24a3622-utilities\") pod \"certified-operators-kmsg8\" (UID: \"36c2aad0-e403-4a0c-b210-3ff9e24a3622\") " pod="openshift-marketplace/certified-operators-kmsg8"
Oct 07 14:06:31 crc kubenswrapper[5024]: I1007 14:06:31.053793 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36c2aad0-e403-4a0c-b210-3ff9e24a3622-catalog-content\") pod \"certified-operators-kmsg8\" (UID: \"36c2aad0-e403-4a0c-b210-3ff9e24a3622\") " pod="openshift-marketplace/certified-operators-kmsg8"
Oct 07 14:06:31 crc kubenswrapper[5024]: I1007 14:06:31.075881 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kcjj\" (UniqueName: \"kubernetes.io/projected/36c2aad0-e403-4a0c-b210-3ff9e24a3622-kube-api-access-7kcjj\") pod \"certified-operators-kmsg8\" (UID: \"36c2aad0-e403-4a0c-b210-3ff9e24a3622\") " pod="openshift-marketplace/certified-operators-kmsg8"
Oct 07 14:06:31 crc kubenswrapper[5024]: I1007 14:06:31.178519 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kmsg8"
Oct 07 14:06:31 crc kubenswrapper[5024]: I1007 14:06:31.714323 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kmsg8"]
Oct 07 14:06:32 crc kubenswrapper[5024]: I1007 14:06:32.146235 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmsg8" event={"ID":"36c2aad0-e403-4a0c-b210-3ff9e24a3622","Type":"ContainerStarted","Data":"86ec7c8e15bb690d93620cf3289c83bd4b48e58c09f4342e86d6bdd31bca7c40"}
Oct 07 14:06:32 crc kubenswrapper[5024]: I1007 14:06:32.146588 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmsg8" event={"ID":"36c2aad0-e403-4a0c-b210-3ff9e24a3622","Type":"ContainerStarted","Data":"28eed235d81d47174199da41407fe1a5e1055f43baf9363472214dbbfa0cf8ef"}
Oct 07 14:06:33 crc kubenswrapper[5024]: I1007 14:06:33.163105 5024 generic.go:334] "Generic (PLEG): container finished" podID="36c2aad0-e403-4a0c-b210-3ff9e24a3622" containerID="86ec7c8e15bb690d93620cf3289c83bd4b48e58c09f4342e86d6bdd31bca7c40" exitCode=0
Oct 07 14:06:33 crc kubenswrapper[5024]: I1007 14:06:33.163165 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmsg8" event={"ID":"36c2aad0-e403-4a0c-b210-3ff9e24a3622","Type":"ContainerDied","Data":"86ec7c8e15bb690d93620cf3289c83bd4b48e58c09f4342e86d6bdd31bca7c40"}
Oct 07 14:06:33 crc kubenswrapper[5024]: I1007 14:06:33.167462 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 07 14:06:36 crc kubenswrapper[5024]: I1007 14:06:36.194188 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmsg8" event={"ID":"36c2aad0-e403-4a0c-b210-3ff9e24a3622","Type":"ContainerStarted","Data":"a906ecee80a3c1b83a01f5d9e4eeb5f929996f953aef58b7c2f2ec980cd2f940"}
Oct 07 14:06:37 crc kubenswrapper[5024]: I1007 14:06:37.218220 5024 generic.go:334] "Generic (PLEG): container finished" podID="36c2aad0-e403-4a0c-b210-3ff9e24a3622" containerID="a906ecee80a3c1b83a01f5d9e4eeb5f929996f953aef58b7c2f2ec980cd2f940" exitCode=0
Oct 07 14:06:37 crc kubenswrapper[5024]: I1007 14:06:37.218290 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmsg8" event={"ID":"36c2aad0-e403-4a0c-b210-3ff9e24a3622","Type":"ContainerDied","Data":"a906ecee80a3c1b83a01f5d9e4eeb5f929996f953aef58b7c2f2ec980cd2f940"}
Oct 07 14:06:39 crc kubenswrapper[5024]: I1007 14:06:39.249618 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmsg8" event={"ID":"36c2aad0-e403-4a0c-b210-3ff9e24a3622","Type":"ContainerStarted","Data":"43e927431998dfd093688ae7c10fbbc2bfe06afb91701b2c3b96d7027291fa5e"}
Oct 07 14:06:39 crc kubenswrapper[5024]: I1007 14:06:39.273941 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kmsg8" podStartSLOduration=4.032424245 podStartE2EDuration="9.273917094s" podCreationTimestamp="2025-10-07 14:06:30 +0000 UTC" firstStartedPulling="2025-10-07 14:06:33.167196168 +0000 UTC m=+5931.242983016" lastFinishedPulling="2025-10-07 14:06:38.408688987 +0000 UTC m=+5936.484475865" observedRunningTime="2025-10-07 14:06:39.26926321 +0000 UTC m=+5937.345050048" watchObservedRunningTime="2025-10-07 14:06:39.273917094 +0000 UTC m=+5937.349703932"
Oct 07 14:06:41 crc kubenswrapper[5024]: I1007 14:06:41.179087 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kmsg8"
Oct 07 14:06:41 crc kubenswrapper[5024]: I1007 14:06:41.179581 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kmsg8"
Oct 07 14:06:41 crc kubenswrapper[5024]: I1007 14:06:41.267959 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kmsg8"